Friday, September 29, 2023

Weekly Review 29 September 2023

Some interesting links that I Tweeted about in the last week (I also post these on Mastodon and Threads):


  1. I don't think AIs like ChatGPT are creative; I think they are mashing together the creativity of a huge number of humans: https://spectrum.ieee.org/ai-better-improvisor-than-most
  2. The business risks of AI: https://www.informationweek.com/machine-learning-ai/dialing-down-ai-risks
  3. Big tech still seems resistant to the idea of regulating AI: https://www.technologyreview.com/2023/05/16/1073167/how-do-you-solve-a-problem-like-out-of-control-ai/
  4. Another copyright suit on the data used to train AI: https://www.theregister.com/2023/09/21/authors_guild_openai_lawsuit/
  5. I think we are going to see AI services relocating to countries with fewer or lighter regulations: https://spectrum.ieee.org/ai-regulation-worldwide
  6. Using deep learning to predict which genetic mutations cause disease: https://www.technologyreview.com/2023/09/19/1079871/deepmind-alphamissense-ai-pinpoint-causes-genetic-disease/ But they are keeping the model secret.
  7. So far, I'm not convinced by Turnitin's AI detection abilities: https://www.theregister.com/2023/09/23/turnitin_ai_detection/ It's great for detecting plagiarism, though.
  8. The two copyright issues with AI generated material seem to be 1) who owns the copyright of what is generated, and 2) the copyright of the material used to train the models: https://www.computerworld.com/article/3707348/generative-ai-and-us-copyright-law-are-on-a-collision-course.html
  9. More on why businesses need to be careful using generative AI: https://www.theregister.com/2023/09/22/datagrail_generative_ai/
  10. Like most technologies, AI is developing much faster than government regulators can keep up: https://www.theguardian.com/technology/2023/sep/22/ai-developing-too-fast-for-regulators-to-keep-up-oliver-dowden
  11. Free-to-use alternatives to ChatGPT: https://www.kdnuggets.com/top-5-free-alternatives-to-gpt4
  12. The AI Incident Database is tracking incidents of harm caused by AI: https://www.informationweek.com/machine-learning-ai/weighing-the-ai-threat-by-incident-reports Most are caused by the way people use the AI, rather than the AI itself.
  13. I don't think restricting authors to publishing three ebooks a day is going to solve the problem of AI-generated books: https://www.theregister.com/2023/09/22/amazon_ai_book_publishing_limit/ It would probably be more helpful to verify authors' identities.
  14. Using AI to diagnose problems with people's gait: https://techcrunch.com/2023/09/21/plantiga-technologies-ai-powered-footwear-sensor-pod-aims-to-reduce-injury-risks/
  15. More on the copyright issues around generative AI: https://techcrunch.com/2023/09/21/the-copyright-issues-around-generative-ai-arent-going-away-anytime-soon/
  16. Oracle is integrating the handling of vectors, a key part of some AI models, into their databases: https://www.datanami.com/2023/09/22/oracle-introduces-integrated-vector-database-for-generative-ai/ (There is a short sketch of what vector search involves after this list.)
  17. The difference between generative and traditional AI: https://www.kdnuggets.com/traditional-ai-vs-generative-ai
  18. A comparison of DALL-E 3 and Midjourney: https://dataconomy.com/2023/09/22/comparison-dall-e-3-vs-midjourney/
  19. How AI can improve augmented reality: https://www.informationweek.com/machine-learning-ai/how-artificial-intelligence-could-boost-artificial-reality
  20. The AI aspect of this kind of "bossware" is features like gaze tracking, but I would never have used it when I was a manager. I cared about results, not how many keystrokes an employee typed: https://www.stuff.co.nz/business/132945922/stay-away-from-ai-bossware-that-monitors-staff--salesforce-ethics-expert
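
A note on item 16: "handling of vectors" means storing embedding vectors alongside other data and finding the stored vectors closest to a query vector (similarity search). Below is a minimal sketch of that core operation in plain Python, assuming cosine similarity; it is not Oracle's API, and the documents and embedding values are made up.

import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical documents with pre-computed embedding vectors. A real system
# would produce much longer vectors using an embedding model.
store = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.2, 0.8, 0.1],
    "warranty terms": [0.7, 0.2, 0.3],
}

def nearest(query_vector, k=2):
    # Rank stored documents by similarity to the query and return the top k.
    ranked = sorted(store.items(),
                    key=lambda item: cosine_similarity(query_vector, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

print(nearest([0.85, 0.15, 0.05]))  # -> ['refund policy', 'warranty terms']

A production vector database does the same thing, but with approximate nearest-neighbour indexes so it scales to millions of vectors.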

