ChatGPT Isn't 'Hallucinating' – It's Bullshitting 💩💩: Researchers
+ OpenAI launches GPT-4o mini (takes on Claude)
Welcome to yet another edition of NextBigWhat’s AI Newsletter, bringing you a (human-)curated mix of ideas, news, and useful tools.
Big idea of the day
“The number one interview question in my mind of 2024 is: Tell me how you’ve used AI in your job or at home” – LinkedIn COO Dan Shapero
AI News
The big news is that OpenAI has launched GPT-4o mini, a smaller, cheaper variant of its GPT-4o model that replaces GPT-3.5 Turbo, with support for all multimodal formats coming soon.
GPT-4o mini costs $0.15 per 1M input tokens and $0.60 per 1M output tokens, prices lower than those of Claude 3 Haiku and Gemini 1.5 Flash.
OpenAI has talked to chip designers including Broadcom about developing an AI server chip, and has hired former members of a Google unit that produces TPUs.
Google, Anthropic, OpenAI, Microsoft, Nvidia, and others form the Coalition for Secure AI to share best practices and open-source methodologies for secure AI deployment.
Meta has decided to suspend its generative AI tools in Brazil after the government objected to Meta's new privacy policy on using personal data to train AI models.
Nvidia and Mistral announce Mistral NeMo, a 12B-parameter model with a context window of up to 128k tokens, available under the Apache 2.0 open-source license.
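For a feel of what the GPT-4o mini prices above translate to in practice, here's a quick back-of-the-envelope cost sketch. The per-token rates come from the item above; the function name and structure are illustrative, not part of any OpenAI API.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float = 0.15,
                  output_price_per_m: float = 0.60) -> float:
    """USD cost of one call at the quoted GPT-4o mini rates
    ($0.15 per 1M input tokens, $0.60 per 1M output tokens)."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Example: a 10k-token prompt producing a 1k-token reply
cost = estimate_cost(10_000, 1_000)
print(f"${cost:.4f}")  # $0.0015 input + $0.0006 output = $0.0021
```

At these rates, a million such calls would run roughly $2,100, which is the kind of margin that makes the comparison with Claude 3 Haiku and Gemini 1.5 Flash matter.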
AI Products
Produkto: Launch your business in seconds using AI
Builco: Build MVPs with Next.js using AI in minutes
Startt: Create a landing page for your product in seconds
Archie: Design and plan software in minutes with a single prompt
* represents a sponsored product; connect with us to promote your business.
The big news? Well, ChatGPT isn't 'hallucinating' – it's bullshitting, according to researchers and academics.
Among philosophers, “bullshit” has a specialist meaning. When someone bullshits, they’re not telling the truth, but they’re also not really lying. What characterizes the bullshitter is that they just don’t care whether what they say is true. ChatGPT and its peers cannot care, and they are instead, in a technical sense, bullshit machines.
When it goes wrong, it isn’t because it has failed to represent the world this time; it never tries to represent the world! Calling its falsehoods “hallucinations” doesn’t capture this feature. (via)
Quote of the day
We’d love to hear what else you’d like us to cover in the AI newsletter. Just hit the reply button with your ideas!