Friday, December 15, 2023

Weekly Review 15 December 2023

Some interesting links that I Tweeted about in the last week (I also post these on Mastodon, Threads, Newsmast and Post):

  1. I remember playing the first, freeware episode of Doom all the way through in one evening, with a broken finger. That's how addictive Doom was, and it launched 30 years ago:
  2. A technique for visualising where neural networks make mistakes:
  3. It seems to me that it's not so much that AI is creating a crisis in science as that the people using the algorithms don't know enough about them to avoid their pitfalls:
  4. If you are using any social network, you are probably providing free training data for their AI:
  5. It might be plagiarism-free (it isn't; it's trained on others' work), but it's still cheating for a student to use an AI to write their essays:
  6. Despite the widespread use of generative AI, students still need to learn how to write:
  7. Honestly, I'm glad they are breaking up Te Pukenga; combining all of the polytechs under one structure was a pretty dumb idea. But I would have thought that the new government would at least have some idea of what to replace it with:
  8. Since generative AI has only really been widely available for the last year, even 10% of organisations launching software using it is quite a lot. Investing in employee skills and knowledge will help other organisations to catch up:
  9. Code generated by AI can make developers more productive, or at least feel that they are, but the generated code has some real security problems:
  10. I honestly thought Purple Llama was a satire, rather than a set of tools for safety in AI:
  11. Big AI requires big infrastructure, and that means that only the big tech companies can do it. And quite a few of them are using your data to build their models:
  12. How machine learning has helped to identify "parts of speech" in whale song:
  13. A billion lines of Copilot-suggested code is now in the wild. With the security problems that AI generated code contains, this is a bit concerning:
  14. Experimental evidence that large language models encapsulate human biases:
  15. How AI could actually make the world a better place:
  16. Using generated images to train classifiers gives less biased models:
  17. So Google faked parts of the demo of Gemini. How many other AI companies have also faked demos, as other tech companies do? 
  18. Now Amazon is launching its own AI assistant:
  19. Another article about AI that reads like it was written by one, this time on forecasting in business:
  20. "Most people will have to wait for the full experience" of Google Gemini. Hopefully by then, they will have fixed the parts they had to fake in the demo:
  21. Apple released their own machine learning toolkit:
  22. The European Union agrees to make laws regulating AI:
  23. More on the EU rules around AI:
  24. How AI can contribute to business process improvement:
  25. How to build a personalised ChatGPT:
  26. ChatGPT at the application layer of technology stacks:
