Some interesting links that I Tweeted about in the last week (I also post these on Mastodon, Threads, Newsmast, and Bluesky):
- How generative AI can be applied to biomedical research: https://www.nature.com/articles/d42473-023-00458-1
- Perhaps this tool is 99.9% accurate at detecting AI generated writing because it classifies everything as AI-generated? https://futurism.com/the-byte/openai-software-detects-ai-writing
- Skyrocketing tuition fees aren't just a problem in the USA. Students forgoing electives because of cost will pay a bigger price in the long term; many people have found their true passions through electives: https://slate.com/human-interest/2024/04/yearly-tuition-hike-colleges-universities-yale-boston.html
- Working in AI is inherently multidisciplinary: https://www.businessinsider.com/gen-z-data-scientist-debunks-ai-job-myths-2024-8
- The real danger with AI is that it could train people to treat others like crap: https://www.popsci.com/technology/openai-jerks/
- I never expected this product to do well; just having AI in something doesn't mean it's actually useful: https://www.theverge.com/2024/8/7/24211339/humane-ai-pin-more-daily-returns-than-sales
- Are the recent departures from OpenAI a sign that the AI bubble is about to burst? https://futurism.com/openai-prominent-employees-leaving
- A new neural network architecture for learning functions: https://spectrum.ieee.org/kan-neural-network
- The timeline of Tesla's Dojo, a supercomputer for training AI: https://techcrunch.com/2024/08/10/teslas-dojo-a-timeline/
- Five things to know about the EU AI law: https://www.datanami.com/2024/08/07/five-questions-as-the-eu-ai-act-goes-into-effect/
- Dell cuts huge numbers of jobs, claims something something #AI something https://www.theregister.com/2024/08/06/dell_layoffs/
- The overall safety and security of generative AI: https://www.informationweek.com/machine-learning-ai/how-safe-and-secure-is-genai-really-
- It seems that, like other kinds of generative AI, GPT-4o also hallucinates, only this time it's an audio hallucination: https://techcrunch.com/2024/08/08/openai-finds-that-gpt-4o-does-some-truly-bizarre-stuff-sometimes/
- A strategy for integrated AI: https://www.datasciencecentral.com/the-recipe-for-achieving-aspirational-ai/
- Yet another way that AI can contribute to climate change: https://www.theregister.com/2024/08/11/pipeline_operators_ai_demand/
- Some ways to use GPT-4o in Python coding: https://www.kdnuggets.com/3-ways-of-building-python-projects-using-gpt-4o
- Training an organisation to prepare for AI: https://www.computerworld.com/article/3484270/how-to-train-an-ai-enabled-workforce-and-why-you-need-to.html
- The reality is most AI is not making money: https://www.informationweek.com/machine-learning-ai/did-genai-expectations-just-crash-into-cold-economic-reality-
- More and more companies are going to be resorting to legally questionable means to gather the data needed for large-scale AI: https://www.computerworld.com/article/3483812/nvidia-reportedly-trained-ai-models-on-youtube-data.html
- The legal uncertainty around testing large language model AI: https://www.theregister.com/2024/08/08/lawyers_say_us_cybersecurity_law/
- Of course tech companies don't trust government regulation of AI. Tech companies don't like regulation on anything: https://www.datanami.com/2024/08/08/collibra-survey-us-tech-executives-dont-trust-u-s-government-approach-to-ai-regulation/