ScienceBlog.com

Large language models


Why GPT can’t think like us


Highlighting the need for caution, researchers find AIs are irrational, but not in the same ways humans are


A new way to let AI chatbots converse all day without crashing


Researchers developing AI to make the internet more accessible


There’s a faster, cheaper way to train large language models

MIT researchers found that massive neural network models similar to large language models can contain smaller linear models inside their hidden layers, which the large models can train to complete new tasks using simple learning algorithms. Image credit: Jose-Luis Olivares, MIT

How language models like ChatGPT learn new tasks from just a few examples


© 2026 ScienceBlog.com | Follow our RSS / XML feed