Weekly Review 3 November 2023

Some interesting links that I Tweeted about in the last week (I also post these on Mastodon and Threads):

  1. This puts the rare negative comment I get in my teaching evaluations into perspective:
  2. "Delphi has still not matched human performance at making moral judgements" - I have met a few humans who have not matched human performance at making moral judgements. Immoral humans still concern me more than immoral AI:
  3. An overview of how large language model AI works:
  4. A guide on how to write an art prompt for generative AI:
  5. So who is going to actually turn up at the UK PM's summit on AI safety?
  6. Massey University staff ask for hard data to justify job losses; the data doesn't exist:
  7. I've always said that the best people to manage any industry are those who came up through it. Managerialism, decision-making by those who do not know the industry, is the biggest threat to New Zealand universities:
  8. Universities losing touch with their cities doesn't help them, either:
  9. Using an AI to boost a sports team's engagement with fans:
  10. Woodpecker, a tool for "healing" hallucinating generative AI:
  11. Not content with generating product reviews with AI, Amazon is now using it to produce advertising content:
  12. A tool to find the provenance of data used in training AI:
  13. A research group used a machine learning model to predict the financially optimal times to use the electricity generated by nuclear plants to produce hydrogen:
  14. AWS named their chips Trainium and Inferentia? Did an AI come up with those names?
  15. A list of the pros and cons of the top five cloud AI and machine learning platforms:
  16. Poisoning art with Nightshade to prevent it being used in generative AI:
  17. More on the generative AI image data poisoning system Nightshade:
  18. Unfortunately, I expect it won't be long before generative AI companies close the vulnerabilities Nightshade exploits to protect artists' work. But for now it's something, at least:
  19. An AI for working with research data. I really hope it doesn't hallucinate like other AIs do:
  20. A young person's guide to personal data and how it's used by AI:
  21. So the US federal government is now setting rules for how its employees use AI:
  22. The five biggest AI companies do not rate well on transparency:
  23. A short history of AI:
  24. It seems to me that the hallucinations produced by generative AI come about because they have no understanding of what they are producing:
  25. More product reviews being produced by generative AI:
