Some interesting links that I Tweeted about in the last week (I also post these on Mastodon and Threads):
- Using software to detect inappropriately duplicated images in journal articles: https://www.nature.com/articles/d41586-023-02920-y It never ceases to amaze me how many authors think they can get away with shonky stuff like this.
- While an AI might tell you that a hot stove will melt an egg, eggs don't actually melt: https://arstechnica.com/information-technology/2023/09/can-you-melt-eggs-quoras-ai-says-yes-and-google-is-sharing-the-result/
- Multi-modal inputs for ChatGPT: https://spectrum.ieee.org/chatgpt-multimodal
- An overview of hierarchical clustering: https://www.kdnuggets.com/unveiling-hidden-patterns-an-introduction-to-hierarchical-clustering I've always preferred optimisation clustering myself (a small sketch contrasting the two approaches follows this list).
- I don't think it's a good idea to try to cling onto the dead like this. We miss our loved ones who are gone, but we have to let them go to move on with our own lives: https://www.technologyreview.com/2022/10/18/1061320/digital-clones-of-dead-people/
- The fear with releasing the weight values of an AI is that it can be retrained to bypass safeguards. But retraining these models is a very fraught process that can easily break them: https://spectrum.ieee.org/meta-ai
- Of all the potential problems with AI, having them become your perfect friend is the scariest: https://dataconomy.com/2023/10/05/hiwaifu-ai-wants-to-be-your-digital-best-friend/
- So they've put a slightly improved AI into an existing robot dog? https://interestingengineering.com/innovation/stanford-introduces-autonomous-robot-dogs-with-ai-brains
- Is OpenAI going to make its own custom chips? https://www.nextplatform.com/2023/10/06/openai-to-join-the-custom-ai-chip-club/
- How AI can be used to threaten freedom: https://www.technologyreview.com/2023/10/04/1080801/generative-ai-boosting-disinformation-and-propaganda-freedom-house/
- Younger people, and men, have more trust in AI than do older people and women: https://www.computerworld.com/article/3707795/ai-trust-gap-based-on-gender-age.html
- More on the overall reluctance to use AI in business: https://www.informationweek.com/machine-learning-ai/ai-and-overcoming-user-resistance
- The push-back against AI from creative industries: https://techcrunch.com/2023/10/06/creatives-across-industries-are-strategizing-together-around-ai-concerns/
- The challenges organisations face, and the concerns they hold, regarding generative AI: https://www.techrepublic.com/article/it-survey-challenges-solutions-generative-ai-adoption/
- So even if we did require generative AI systems to embed watermarks in the material they produce, such watermarks are easy to remove: https://www.extremetech.com/computing/ai-watermarks-are-too-easy-to-remove-researchers-show
- An overview of ChatGPT Vision: https://dataconomy.com/2023/10/04/chatgpt-vision-is-insanely-good-here-is-what-it-can-and-cant-do/
- No, ChatGPT and other AIs do not think. A parrot can see, hear and speak, but if it asks you if you want a cup of tea, it doesn't know to put the kettle on: https://www.datasciencecentral.com/generative-ai-megatrends-chatgpt-can-see-hear-and-speak-but-what-does-it-mean-when-chatgpt-can-think/
- Using AI to find drug candidates for protein misfolding diseases like Alzheimer's: https://www.datanami.com/2023/10/06/new-ai-technique-helps-find-alzheimers-drug-targets/
- Personally, I think we're still a long way from Artificial General Intelligence, mainly because specialised AI is more immediately useful: https://www.kdnuggets.com/how-close-are-we-to-agi
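For the clustering item above, here is a minimal sketch (not from the linked article, and using SciPy and scikit-learn as assumed libraries) contrasting agglomerative hierarchical clustering, which builds a dendrogram bottom-up, with an optimisation-based method such as k-means, which iteratively minimises within-cluster variance:

```python
# Minimal sketch: hierarchical (dendrogram-based) vs optimisation-based (k-means) clustering.
# Assumes NumPy, SciPy and scikit-learn are installed; the toy data is illustrative only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy data: two well-separated 2-D blobs of 50 points each.
points = np.vstack([rng.normal(0, 0.5, (50, 2)),
                    rng.normal(5, 0.5, (50, 2))])

# Hierarchical: build the full merge tree, then cut it into 2 clusters.
tree = linkage(points, method="ward")
hier_labels = fcluster(tree, t=2, criterion="maxclust")   # labels start at 1

# Optimisation-based: k-means minimises within-cluster sum of squares.
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)

print("hierarchical cluster sizes:", np.bincount(hier_labels)[1:])
print("k-means cluster sizes:     ", np.bincount(km_labels))
```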