Some interesting links that I Tweeted about in the last week (I also post these on Mastodon, Threads, Newsmast, and Bluesky):
- A politician's crusade to force AI companies to publicise the dangers of their technologies: https://techcrunch.com/2025/09/23/scott-wiener-on-his-fight-to-make-big-tech-disclose-ais-dangers/
- AI is not yet a threat to jobs in law, but it will soon put pressure on junior lawyers: https://www.nzherald.co.nz/nz/artificial-intelligence-law-firms-face-ai-dilemma-as-junior-roles-and-graduate-jobs-come-under-pressure/GVO24DZWTFG5HOONB6Y2BLAHP4/
- Some of the ways AI can go wrong in the workplace: https://www.rnz.co.nz/news/thedetail/573798/the-detail-when-ai-in-the-workplace-goes-wrong
- Who did and did not sign the letter calling for red lines around AI: https://www.theregister.com/2025/09/23/ai_un_controls/
- An AI managed a baseball team during a game. It did not end well: https://techcrunch.com/2025/09/22/the-oakland-ballers-let-an-ai-manage-the-team-what-could-go-wrong/
- Much more AI hardware is being built, but where will the electricity come from? https://arstechnica.com/ai/2025/09/openai-and-nvidias-100b-ai-plan-will-require-power-equal-to-10-nuclear-reactors/
- One professor's approach to essays in the age of AI: https://www.insidehighered.com/opinion/career-advice/teaching/2025/07/01/multiday-class-essay-chatgpt-era-opinion
- Job applications written by AI, with AI-optimised CVs, that are then processed by AI: https://www.computerworld.com/article/4059636/is-ai-killing-the-resume.html
- I remember when a TV had an on/off switch and a dial to select the channel. Do we really need to embed AI into them? https://techcrunch.com/2025/09/22/googles-gemini-ai-is-coming-to-your-tv/
- AI-generated malware is now in the wild: https://www.theregister.com/2025/09/23/kaspersky_revengehotels_checks_back_in/
- British banks are rolling out AI, but are they considering the security ramifications first? https://www.theregister.com/2025/09/22/lloyds_data_ai_deployment/
- Using AI in materials science: https://www.bigdatawire.com/2025/09/22/how-scientists-are-teaching-ai-to-understand-materials-data/
- Some of the dangers that can arise from misaligned AI: https://arstechnica.com/google/2025/09/deepmind-ai-safety-report-explores-the-perils-of-misaligned-ai/
- Scammers are using AI-generated phone calls to target businesses: https://www.theregister.com/2025/09/23/gartner_ai_attack/
- As more AI regulation appears in legislation, big tech fights back: https://techcrunch.com/2025/09/23/meta-launches-super-pac-to-fight-ai-regulation-as-state-policies-mount/
- More ridiculous sums of money being invested in hardware for AI: https://www.rnz.co.nz/news/business/573803/nvidia-to-invest-100-billion-in-openai-as-ai-datacenter-competition-intensifies
- Whatever happens with AI, the people building the infrastructure are making good money out of it: https://techcrunch.com/2025/09/22/the-billion-dollar-infrastructure-deals-powering-the-ai-boom/
- AI are going to start protecting themselves from being shut down: https://www.theregister.com/2025/09/22/google_ai_misalignment_risk/
- AI is a risk, but can also help in the fight against climate change: https://www.theguardian.com/technology/2025/sep/22/ai-carries-risks-but-will-help-tackle-global-heating-says-uns-climate-chief
- Another specialised chip for AI: https://dataconomy.com/2025/09/23/mediatek-unveils-dimensity-9500-ai-chip/
- Locations of five more data centres for running AI have been released: https://www.theregister.com/2025/09/24/openai_oracle_softbank_datacenters/
- AI hallucinations are unavoidable: https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html
- AI do not produce accurate summaries of scientific papers: https://arstechnica.com/ai/2025/09/science-journalists-find-chatgpt-is-bad-at-summarizing-scientific-papers/
- Some types of fraud, and how AI can guard against them: https://dataconomy.com/2025/09/22/selected-ai-fraud-prevention-solutions-september-2025/
- AI struggle with non-Western cultural norms: https://arstechnica.com/ai/2025/09/when-no-means-yes-why-ai-chatbots-cant-process-persian-social-etiquette/
- The world needs to establish red lines that AI must not be allowed to cross: https://www.nzherald.co.nz/world/scientists-urge-global-ai-red-lines-as-leaders-gather-at-un-general-assembly/VJWWUGL7HFHBBMMKXOR3DBBFTY/
- Using AI to find possible dating matches isn't a bad idea, but I really don't trust Facebook with that kind of data: https://techcrunch.com/2025/09/22/facebook-is-getting-an-ai-dating-assistant/
- AI have learned the same biases as human medical practitioners, and downplay the symptoms of women and minorities: https://arstechnica.com/health/2025/09/ai-medical-tools-found-to-downplay-symptoms-of-women-ethnic-minorities/