Friday, November 10, 2023

Weekly Review 10 November 2023

Some interesting links that I Tweeted about in the last week (I also post these on Mastodon, Threads, and Newsmast):

  1. A detailed piece on Biden's executive order on AI:
  2. More details on Biden's executive order on AI:
  3. The history of ChatGPT:
  4. Now Microsoft wants to use AI to boost the security of its products:
  5. Italy's chief of AI is 85 years old (not necessarily a bad thing) and has no expertise on AI (that is bad):
  6. Scientists and other experts need to be free to communicate information to the community without fear of being targeted by conspiracy theory loonies. What Siouxsie Wiles has gone through is entirely unacceptable:
  7. How university librarians will need to adapt to AI:
  8. I'm not optimistic that the incoming government is either motivated or equipped to make New Zealand universities world-class. Tertiary education has been treated as a political football for decades, to the detriment of the country:
  9. Should the big companies developing AI be more responsible for the harm their technology creates?
  10. An AI assisted laundromat, in small-town New Zealand:
  11. While the headline claims that an AI was used to modify the photos, could it have been a more basic image manipulation? In either case, another potential misuse of AI technologies:
  12. Generative AI uses a lot of energy. While models need to be made more efficient, I'd rather that energy be used for training machine learning models than mining cryptocurrency:
  13. "DeepSteaks" - an amusing play on words, with a serious message about how AI can be used to produce misinformation:
  14. Using people's likenesses without their permission is another problem with generative AI:
  15. It's not surprising that actors have an issue with the use of generative AI in Hollywood; it's now trivial to produce video of anyone doing anything, without their knowledge:
  16. And of course the big AI companies have plenty of arguments for not paying the people whose material they used to train their models. How about sharing some of those billions with the original creators?
  17. There's a lack of people with the skills to write effective prompts for generative AI. So should companies just recruit or train in-house?
  18. Generative AI has no understanding of what it is producing, so of course it lacks any sense of what is appropriate to poll readers about:
  19. A comprehensive guide to using AI art generators:
  20. I largely changed research fields for my postdoc career, although I still spent most of my time developing algorithms and software, just in a different context. These tips would have been helpful to me at the time:
  21. I'm not sure that having an AI-enabled pin/badge is really more convenient than a smartphone - a lot of people wouldn't be caught dead without their phone:
  22. Continuous machine learning with nanowire chips. I bet these would be really low power, as well:
  23. A custom large language model AI for assisting chip engineers:
  24. Six myths about #AI debunked:
  25. How mixing #AI technologies can make large language models better:
  26. Five things that came out of the UK government's AI summit:
  27. My concern is that different governments have different ideas about what "unsafe" means for AI:
  28. Sunak's AI summit is a start, but I expect all the good intentions are going to get lost under a tsunami of self-interest:
  29. The use and abuse of AI is just another example of why it was a bad idea to let corporations grow to be as powerful as governments:
