Some interesting links that I Tweeted about in the last week (I also post these on Mastodon, Threads, Newsmast, Bluesky and Post):
- Shut down criticism by threatening people's future payouts. I don't think applying lessons from the Marquis de Sade School of Management is a good way to produce reliable AI: https://futurism.com/the-byte/sam-altman-nda-superintelligent
- Looks like if you find a vulnerability in an AI, there's no way to actually report it: https://www.theregister.com/2024/05/23/ai_untested_unstable/
- Quality, copyright and ethics are the three reasons cited for banning AI generated code from open source projects: https://www.tomshardware.com/software/linux/linux-distros-ban-tainted-ai-generated-code
- You won't be able to keep AI out of your business, it's going to sneak in whatever you do: https://www.computerworld.com/article/2108511/trying-to-keep-ai-from-sneaking-into-your-environment-good-luck.html
- OpenAI now has an agreement in place to access News Corp's material for training its generative AI. After News Corp sued them. Lawsuit as negotiating tactic? https://dataconomy.com/2024/05/23/the-news-corp-openai-deal/
- Code generated by current AI will be less effective than human-written code, as current AI has no understanding of what it is writing: https://futurism.com/the-byte/study-chatgpt-answers-wrong
- I was skeptical about the viability of these devices. If you want AI on your person, why can't a smartphone do it? https://spectrum.ieee.org/ai-gadgets-2024
- My unpopular opinion is that large-scale AI requires the use of large-scale nuclear power. Anything else is just going to pump up carbon emissions: https://futurism.com/the-byte/microsoft-emissions-ai-datacenters
- How close does an AI-generated voice have to be to someone's natural voice before it causes legal problems? One thing's certain, the real winners of the AI bubble are going to be the lawyers: https://www.informationweek.com/machine-learning-ai/scarlett-johansson-openai-and-silencing-sky-
- Microsoft launches its AI studio: https://www.theregister.com/2024/05/21/microsoft_ai_studio_opens_for/
- Ethical issues around using generative AI: https://dataconomy.com/2024/05/22/what-are-some-ethical-considerations-when-using-generative-ai/
- The Turing Test was never intended as a serious test for machine intelligence. We do need proper tests for this now, though: https://futurism.com/the-byte/gpt-4-passed-turing-test
- Combining bottom-up learning with a neuro-symbolic approach gives AI better reasoning capabilities: https://www.livescience.com/technology/artificial-intelligence/mit-gives-ai-the-power-to-reason-like-humans-by-creating-hybrid-architecture
- I suppose the advantage of spending your time producing propaganda is that reality is not much of a concern for you: https://www.stuff.co.nz/world-news/350286471/these-isis-news-anchors-are-ai-fakes-their-propaganda-real
- China is developing its own large language model AI: https://www.nature.com/articles/d41586-024-01495-6
- Not so much discrimination, as a predictable outcome of sampling from a small population. A problem for all AI that deal with people: https://www.nzherald.co.nz/kahu/maori-and-pasifika-face-discrimination-over-polices-use-of-biometric-data-expert/SWYY3RNDHZBX3NBLVWX5FGQWHQ/
- Using AI to fake interviews is NOT OK: https://www.stuff.co.nz/sport/350287905/michael-schumachers-family-wins-compensation-fake-ai-interview-german-magazine
- So OpenAI has just given up on the idea of guarding against the dangers of super-intelligent AI? Doesn't look good, but I've always thought these dangers were overblown: https://futurism.com/the-byte/openai-researcher-quits-criticism-superallignment
- Using AI software developed for trains to protect satellites: https://www.theregister.com/2024/05/24/jaxa_enlists_railway_ai_maintenance/
- "Slop": AI-generated material on the web intended to drive traffic and generate advertising revenue: https://www.theguardian.com/technology/article/2024/may/19/spam-junk-slop-the-latest-wave-of-ai-behind-the-zombie-internet
- So Microsoft's AI will monitor and record everything you do on your computer? Why does that make me very, very nervous? https://spectrum.ieee.org/microsoft-copilot
- AI is not making search better: https://www.theverge.com/2024/5/23/24162896/google-ai-overview-hallucinations-glue-in-pizza
- Google's AI also has no understanding of what it's saying, so it comes out with some really stupid (glue on pizza) or dangerous (all the things you shouldn't do when bitten by a snake) advice: https://techcrunch.com/2024/05/23/using-memes-social-media-users-have-become-red-teams-for-half-baked-ai-features/
- Do AI really understand people, or are they just parroting responses that make it look like they do? https://spectrum.ieee.org/theory-of-mind-ai