Some interesting links that I Tweeted about in the last week (I also post these on Mastodon, Threads, Newsmast, and Bluesky):
- Economists are uncertain about the impact of AI: https://www.computerworld.com/article/4108089/a-wild-future-how-economists-are-handling-ai-uncertainty-in-forecasts.html
- Predicting health issues from sleep data using AI: https://www.rnz.co.nz/news/world/583468/ai-uses-sleep-study-data-to-accurately-predict-dozens-of-health-issues
- Yet another AI posing a security risk: https://www.theregister.com/2026/01/07/ibm_bob_vulnerability/
- Snowflake is embedding AI into its data handling tools: https://www.theregister.com/2026/01/06/snowflake_google_gemini_support/
- AI is using so much RAM that there aren't enough chips to go around: https://www.theregister.com/2026/01/06/memory_firm_profits_up_as/
- I'm still very skeptical about AI, especially generative AI, in health: https://dataconomy.com/2026/01/08/openai-launches-dedicated-chatgpt-health-space/
- Governance is the key to scaling AI in organisations: https://www.informationweek.com/machine-learning-ai/scaling-ai-value-demands-industrial-governance
- I think that only checking the first 250 prescription renewals made by AI is a bit short on quality control here. It's the edge cases where problems are going to occur: https://arstechnica.com/health/2026/01/utah-allows-ai-to-autonomously-prescribe-medication-refills/
- A further attempt to make the output of AI more accurate: https://www.techtarget.com/searchdatamanagement/news/366637142/New-Databricks-tool-aims-to-up-agentic-AI-response-accuracy
- Why Yann LeCun left Facebook's AI lab: https://arstechnica.com/ai/2026/01/computer-scientist-yann-lecun-intelligence-really-is-about-learning/
- Businesses can turn the weaknesses of AI against their competitors: https://www.computerworld.com/article/4114017/companies-can-compete-against-ai-by-delivering-what-ai-cant.html
- So it's going to be a bit longer before AI destroys us all? https://www.theguardian.com/technology/2026/jan/06/leading-ai-expert-delays-timeline-possible-destruction-humanity
- Prompt injection attacks continue to be a problem for AI: https://www.theregister.com/2026/01/08/openai_chatgpt_prompt_injection/
- AI is being used to write malware, but it's just as full of hallucinations as any other code generated by AI: https://www.theregister.com/2026/01/08/criminals_vibe_coding_malware/
- Generative AI in the financial services industry: https://www.techrepublic.com/article/generative-ai-financial-services/
- Putting AI in heavy construction machinery: https://dataconomy.com/2026/01/08/caterpillar-partners-with-nvidia-to-put-ai-in-excavators/
- Ford wants to use AI to personalise vehicles to their drivers: https://arstechnica.com/cars/2026/01/in-car-ai-assistant-coming-to-fords-and-lincolns-in-2027/
- The distinction between data scientist and AI engineer: https://www.kdnuggets.com/data-scientist-vs-ai-engineer-which-career-should-you-choose-in-2026
- Can sucking carbon dioxide from the air reduce the climate impact of AI data centres? https://www.theregister.com/2026/01/07/new_carbon_capture_tech/
- What you can and can't do with AI generated code: https://www.kdnuggets.com/vibe-code-reality-check-what-you-can-actually-build-with-only-ai
- Some approaches for CIOs to handle the issues raised by AI: https://www.informationweek.com/it-leadership/2026-cio-trend-from-seat-at-the-table-to-the-ai-hot-seat
- Some predictions for AI in 2026: https://www.informationweek.com/machine-learning-ai/13-unexpected-under-the-radar-predictions-for-2026
- Poisoning data sets to prevent stolen AI being used: https://www.theregister.com/2026/01/06/ai_data_pollution_defense/
- No AI should ever assume that its users have good intentions: https://arstechnica.com/tech-policy/2026/01/grok-assumes-users-seeking-images-of-underage-girls-have-good-intent/