Some interesting links that I tweeted about in the last week (I also post these on Mastodon, Threads, Newsmast, and Bluesky):
- Large-scale AI needs large-scale nuclear power, but there are regulatory issues with getting the electricity from the power plants to the data centres: https://spectrum.ieee.org/amazon-data-center-nuclear-power
- Is the collective noun for AI-driven taxis a "Gaggle" or a "Honk"? https://futurism.com/the-byte/robotaxis-gather-honk-all-night
- As generative AI gets better and better, how long before these AI-generated stories become impossible to detect? https://www.huffpost.com/entry/cody-enterprise-reporter-resigns-after-using-artificial-intelligence_n_66bcc610e4b03da4fc01b2d3
- Orwell got it wrong: it's not Big Brother who's watching you, it's AI: https://www.leightonassociates.co.nz/post/ai-is-watching-you-all-shift-long
- Using AI to decode the signals from a brain implant, allowing people to communicate again: https://dataconomy.com/2024/08/15/ai-helps-als-patient-speak-again/
- Blaming remote work for Google falling behind in AI is like blaming a soldier's boots for losing a battle - ultimately, responsibility lies with the leaders: https://www.theregister.com/2024/08/15/googles_exceo_steps_back_from/
- AI is like any other product: if it doesn't function, or isn't useful, then it's pointless: https://futurism.com/the-byte/google-demo-gemini-ai-tech-fail
- This technique might slightly reduce AI hallucinations, but it doesn't solve the underlying problem, which is that an AI has no concept of what the words it spits out actually represent: https://www.computerworld.com/article/3487262/researchers-tackle-ai-fact-checking-failures-with-new-llm-training-technique.html
- You can't just add more AI to try to correct an AI that is broken because of bad data: https://www.computerworld.com/article/3487242/agentic-rag-ai-more-marketing-hype-than-tech-advance.html
- How to get started in a position training AI: https://dataconomy.com/2024/08/15/get-started-with-ai-training-jobs/
- This newspaper doesn't even try to hide the fact its articles are generated by AI: https://futurism.com/entirely-ai-generated-news-site
- A new licensing deal means that actors will get a say in what their AI clones say and do: https://www.theregister.com/2024/08/15/actors_union_ai_voice_clone/
- I wonder how many of these AI-generated job applications are being filtered by AI? https://futurism.com/the-byte/recruiters-ai-generated-job-cvs
- How AI can automate the process of developing a brain connectome: https://www.extremetech.com/science/mit-scientists-detangle-the-brain-with-new-open-source-ai
- Replacing writers with AI without telling the editors - the long, slow decline towards AI-generated mediocrity continues: https://www.theregister.com/2024/08/15/robot_took_my_job/
- More AI-based interference in elections, this time by Iran: https://www.theverge.com/2024/8/16/24221982/openai-iranian-chatgpt-accounts-banned-chatgpt-us-election
- Achieving general AI is not a matter of throwing more and more processing power at the problem; there needs to be a fundamental shift in the model used: https://futurism.com/the-byte/agi-supercomputer
- When an AI can change the parameters of its own execution, we need strong guardrails: https://arstechnica.com/information-technology/2024/08/research-ai-model-unexpectedly-modified-its-own-code-to-extend-runtime/
- The AI bubble seems to be following the same trajectory as the Internet bubble of the late 90s. I expect a lot of AI companies will go under, and the ones that provide useful services will survive: https://www.datanami.com/2024/08/14/is-the-genai-bubble-finally-popping/
- AI technology will get to the point where it is useful in high-school-level teaching, but we're not there yet: https://futurism.com/high-school-starts-replacing-teachers-ai
- I don't think it's a good idea for companies to be driving changes to laws, especially laws that are supposed to protect the public from the problems that AI can cause: https://www.theregister.com/2024/08/16/california_ai_safety_bill/
- This approach of stealing all the data they can get and training their AI on it has to stop; it's legally questionable and does not produce good AI: https://www.theverge.com/2024/8/14/24220658/google-eric-schmidt-stanford-talk-ai-startups-openai
- This is the kind of AI that is likely to survive the AI bubble bursting - it's fairly simple, straightforward, and useful: https://www.computerworld.com/article/3485725/ai-and-ar-can-supercharge-ambient-computing.html