Some interesting links that I Tweeted about in the last week (I also post these on Mastodon, Threads, Newsmast, and Bluesky):
- If people keep saying they've achieved general AI, eventually it might be true: https://futurism.com/openai-employee-claims-agi
- Looks like OpenAI still hasn't learned about using copyrighted data to train its AI: https://www.extremetech.com/gaming/openai-appears-to-have-trained-sora-on-game-content
- Microsoft wants to train one million people in Australia and New Zealand on AI skills: https://www.techrepublic.com/article/microsoft-ai-program-upskill-anz-boost-economy/
- How AI is improving agriculture in India: https://spectrum.ieee.org/ai-agriculture
- Organisations don't need Chief AI Officers: https://www.bigdatawire.com/2024/12/12/why-you-dont-need-a-chief-ai-officer-now-or-likely-ever-heres-what-to-do-instead/
- Using automated reasoning to monitor AI: https://www.bigdatawire.com/2024/12/05/amazon-taps-automated-reasoning-to-safeguard-critical-ai-systems/
- A one million book dataset for training AI: https://dataconomy.com/2024/12/13/google-and-harvard-drop-1-million-books-to-train-ai-models/
- Bug reports generated by AI are harmful to open source projects: https://gizmodo.com/bogus-ai-generated-bug-reports-are-driving-open-source-developers-nuts-2000536711
- Extending large language model AI to allow for non-verbal reasoning: https://arstechnica.com/ai/2024/12/are-llms-capable-of-non-verbal-reasoning/
- I think police should be writing their own reports, not leaving it to AI, especially when they're disabling safeguards: https://www.theregister.com/2024/12/12/aclu_ai_police_report/
- An actual AI researcher and expert writes about future directions of AI: https://ieeexplore.ieee.org/document/10794556
- Another fossil fuel power plant being built solely to power AI: https://techcrunch.com/2024/12/13/exxon-cant-resist-the-ai-power-gold-rush/
- Of course super-intelligent AI will be unpredictable. Humans are unpredictable; why would AI built by humans be any different? https://techcrunch.com/2024/12/13/openai-co-founder-ilya-sutskever-believes-superintelligent-ai-will-be-unpredictable/
- AIs are already showing signs of self-preserving behaviour: https://futurism.com/the-byte/openai-o1-self-preservation
- The problems with adopting AI and how to deal with them: https://www.kdnuggets.com/overcoming-ai-implementation-challenges-lessons-early-adopters
- The different layers and models of AI explained: https://www.extremetech.com/extreme/333143-what-is-artificial-intelligence
- Before a developer embeds an AI in their software, they should know how that AI works: https://www.informationweek.com/software-services/what-developers-should-know-about-embedded-ai
- Not a textbook written by AI, but a textbook that is AI: https://www.insidehighered.com/news/faculty-issues/learning-assessment/2024/12/13/ai-assisted-textbook-ucla-has-some-academics
- Early career workers are the most concerned about AI in the workplace: https://www.computerworld.com/article/3619976/ai-in-the-workplace-is-forcing-younger-tech-workers-to-rethink-their-career-paths.html
- We have reached peak data, and that's causing AI projects to fail: https://blocksandfiles.com/2024/12/13/hitachi-vantara-ai-strategies/
- American whistleblowers seem to have a habit of dying, even ones from the AI industry: https://techcrunch.com/2024/12/13/openai-whistleblower-found-dead-in-san-francisco-apartment/
- Large-scale AI needs nuclear power; nothing else can satisfy the energy demands of modern algorithms: https://spectrum.ieee.org/nuclear-powered-data-center
- A startup that is working to counter AI-generated misinformation: https://techcrunch.com/2024/12/13/as-ai-fueled-disinformation-explodes-here-comes-the-startup-counterattack/
- How to build your organisation's internal AI talent: https://www.informationweek.com/machine-learning-ai/how-to-find-and-train-internal-ai-talent