{"id":175,"date":"2025-05-22T05:48:22","date_gmt":"2025-05-22T12:48:22","guid":{"rendered":"https:\/\/scienceblog.com\/neuroedge\/?p=175"},"modified":"2025-05-22T05:48:22","modified_gmt":"2025-05-22T12:48:22","slug":"ai-learns-to-connect-sight-and-sound-like-humans-do","status":"publish","type":"post","link":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/","title":{"rendered":"AI Learns to Connect Sight and Sound Like Humans Do"},"content":{"rendered":"<p>Artificial intelligence systems are getting better at mimicking how humans naturally connect what they see with what they hear.<\/p>\n<p>MIT researchers have developed a new machine-learning approach that helps AI models <a href=\"https:\/\/arxiv.org\/pdf\/2505.01237\">automatically match corresponding audio and visual information from video clips<\/a>\u2014without needing human labels to guide the process. The breakthrough could eventually help robots better understand real-world environments where sound and vision work together.<\/p>\n<p>The research builds on how humans instinctively learn by linking different senses. When you watch someone play the cello, you naturally connect the musician&#8217;s bow movements with the music you&#8217;re hearing. The MIT team wanted to recreate this seamless integration in artificial systems.<\/p>\n<h2>Fine-Tuning Audio-Visual Connections<\/h2>\n<p>The researchers improved upon their earlier work by creating a method called CAV-MAE Sync, which learns more precise connections between specific video frames and the audio occurring at exactly those moments. Previously, their model would match an entire 10-second audio clip with just one random video frame\u2014like trying to sync a whole song with a single photograph.<\/p>\n<p>&#8220;We are building AI systems that can process the world like humans do, in terms of having both audio and visual information coming in at once and being able to seamlessly process both modalities,&#8221; explains Andrew Rouditchenko, an MIT graduate student and co-author of the research.<\/p>\n<p>The new approach splits audio into smaller windows before processing, creating separate representations that correspond to each smaller time segment. During training, the model learns to associate individual video frames with the audio happening during just those frames\u2014a much more granular and realistic approach.<\/p>\n<h2>Solving Competing Objectives<\/h2>\n<p>The team tackled a fundamental challenge in AI training: balancing two different learning goals that can conflict with each other. The model needs to both reconstruct missing audio and visual information (like filling in blanks) and learn to associate similar sounds with similar images.<\/p>\n<p>These objectives were competing because they required the same internal representations to do double duty. The researchers solved this by introducing specialized &#8220;tokens&#8221;\u2014dedicated components that handle different aspects of learning without interfering with each other.<\/p>\n<p>Key improvements in the new system include:<\/p>\n<ul>\n<li>Fine-grained temporal alignment between audio segments and video frames<\/li>\n<li>Separate &#8220;global tokens&#8221; for learning cross-modal associations<\/li>\n<li>&#8220;Register tokens&#8221; that help focus on important details<\/li>\n<li>Better balance between reconstruction and contrastive learning objectives<\/li>\n<\/ul>\n<p>The architectural tweaks might sound technical, but they address a core problem in AI: how to learn multiple skills simultaneously without one interfering with another.<\/p>\n<h2>Real-World Applications<\/h2>\n<p>The enhanced model showed significant improvements in practical tasks. When asked to retrieve videos based on audio queries, it performed more accurately than previous versions. It also got better at predicting the type of scene from combined audio-visual cues\u2014distinguishing between the sound of a dog barking versus an instrument playing.<\/p>\n<p>&#8220;Sometimes, very simple ideas or little patterns you see in the data have big value when applied on top of a model you are working on,&#8221; notes lead author Edson Araujo, a graduate student at Goethe University in Germany.<\/p>\n<p>The research has immediate applications in journalism and film production, where the model could help automatically curate content by matching audio and video elements. Content creators could use it to find specific types of scenes or sounds within large video libraries.<\/p>\n<h2>Looking Forward<\/h2>\n<p>But the longer-term vision is more ambitious. The researchers want to eventually integrate this audio-visual technology into large language models\u2014the AI systems that power modern chatbots and virtual assistants.<\/p>\n<p>&#8220;Looking forward, if we can integrate this audio-visual technology into some of the tools we use on a daily basis, like large language models, it could open up a lot of new applications,&#8221; Rouditchenko says.<\/p>\n<p>The team also hopes to enable their system to handle text data, which would be an important step toward creating comprehensive multimodal AI that processes language, sound, and vision together.<\/p>\n<p>For robots operating in real environments, this kind of integrated perception could prove crucial. Just as humans rely on both sight and sound to navigate the world, future robotic systems may need similar capabilities to interact naturally with their surroundings.<\/p>\n<p>The work represents another step toward AI systems that process information more like humans do\u2014not as separate streams of data, but as interconnected experiences that make sense of the world through multiple channels at once.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence systems are getting better at mimicking how humans naturally connect what they see with what they hear. MIT researchers have developed a new machine-learning approach that helps AI models automatically match corresponding audio and visual information from video clips\u2014without needing human labels to guide the process. The breakthrough could eventually help robots better &#8230; <a title=\"AI Learns to Connect Sight and Sound Like Humans Do\" class=\"read-more\" href=\"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/\" aria-label=\"Read more about AI Learns to Connect Sight and Sound Like Humans Do\">Read more<\/a><\/p>\n","protected":false},"author":1297,"featured_media":176,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[2,6],"tags":[],"class_list":["post-175","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-automation-efficiency","category-technology","generate-columns","tablet-grid-50","mobile-grid-100","grid-parent","grid-50"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.3 (Yoast SEO v27.3) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>AI Learns to Connect Sight and Sound Like Humans Do - NeuroEdge<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI Learns to Connect Sight and Sound Like Humans Do\" \/>\n<meta property=\"og:description\" content=\"Artificial intelligence systems are getting better at mimicking how humans naturally connect what they see with what they hear. MIT researchers have developed a new machine-learning approach that helps AI models automatically match corresponding audio and visual information from video clips\u2014without needing human labels to guide the process. The breakthrough could eventually help robots better ... Read more\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/\" \/>\n<meta property=\"og:site_name\" content=\"NeuroEdge\" \/>\n<meta property=\"article:published_time\" content=\"2025-05-22T12:48:22+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/MIT-AV-Learning-01-press_0.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"900\" \/>\n\t<meta property=\"og:image:height\" content=\"600\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"NeuroEdge\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"NeuroEdge\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2025\\\/05\\\/22\\\/ai-learns-to-connect-sight-and-sound-like-humans-do\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2025\\\/05\\\/22\\\/ai-learns-to-connect-sight-and-sound-like-humans-do\\\/\"},\"author\":{\"name\":\"NeuroEdge\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#\\\/schema\\\/person\\\/a13c664778e7eb97cb71e3e1ad356d2e\"},\"headline\":\"AI Learns to Connect Sight and Sound Like Humans Do\",\"datePublished\":\"2025-05-22T12:48:22+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2025\\\/05\\\/22\\\/ai-learns-to-connect-sight-and-sound-like-humans-do\\\/\"},\"wordCount\":725,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2025\\\/05\\\/22\\\/ai-learns-to-connect-sight-and-sound-like-humans-do\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/wp-content\\\/uploads\\\/sites\\\/14\\\/2025\\\/05\\\/MIT-AV-Learning-01-press_0.jpg\",\"articleSection\":[\"Automation &amp; Efficiency\",\"Technology\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2025\\\/05\\\/22\\\/ai-learns-to-connect-sight-and-sound-like-humans-do\\\/#respond\"]}],\"copyrightYear\":\"2025\",\"copyrightHolder\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/#organization\"}},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2025\\\/05\\\/22\\\/ai-learns-to-connect-sight-and-sound-like-humans-do\\\/\",\"url\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2025\\\/05\\\/22\\\/ai-learns-to-connect-sight-and-sound-like-humans-do\\\/\",\"name\":\"AI Learns to Connect Sight and Sound Like Humans Do - NeuroEdge\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2025\\\/05\\\/22\\\/ai-learns-to-connect-sight-and-sound-like-humans-do\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2025\\\/05\\\/22\\\/ai-learns-to-connect-sight-and-sound-like-humans-do\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/wp-content\\\/uploads\\\/sites\\\/14\\\/2025\\\/05\\\/MIT-AV-Learning-01-press_0.jpg\",\"datePublished\":\"2025-05-22T12:48:22+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2025\\\/05\\\/22\\\/ai-learns-to-connect-sight-and-sound-like-humans-do\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2025\\\/05\\\/22\\\/ai-learns-to-connect-sight-and-sound-like-humans-do\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2025\\\/05\\\/22\\\/ai-learns-to-connect-sight-and-sound-like-humans-do\\\/#primaryimage\",\"url\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/wp-content\\\/uploads\\\/sites\\\/14\\\/2025\\\/05\\\/MIT-AV-Learning-01-press_0.jpg\",\"contentUrl\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/wp-content\\\/uploads\\\/sites\\\/14\\\/2025\\\/05\\\/MIT-AV-Learning-01-press_0.jpg\",\"width\":900,\"height\":600,\"caption\":\"Illustration of Asian woman cellist\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2025\\\/05\\\/22\\\/ai-learns-to-connect-sight-and-sound-like-humans-do\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI Learns to Connect Sight and Sound Like Humans Do\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#website\",\"url\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/\",\"name\":\"NeuroEdge\",\"description\":\"A data-driven look at neuroscience and AI, for investors, policymakers, and innovators.\",\"publisher\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#organization\",\"name\":\"NeuroEdge\",\"url\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/wp-content\\\/uploads\\\/sites\\\/14\\\/2025\\\/04\\\/cropped-neuroedge_logo.jpg\",\"contentUrl\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/wp-content\\\/uploads\\\/sites\\\/14\\\/2025\\\/04\\\/cropped-neuroedge_logo.jpg\",\"width\":955,\"height\":191,\"caption\":\"NeuroEdge\"},\"image\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#\\\/schema\\\/logo\\\/image\\\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#\\\/schema\\\/person\\\/a13c664778e7eb97cb71e3e1ad356d2e\",\"name\":\"NeuroEdge\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/28782ec992e8763e1f8d41ddc10864e7d8cd4cb99bacea6224c4abe634bbabec?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/28782ec992e8763e1f8d41ddc10864e7d8cd4cb99bacea6224c4abe634bbabec?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/28782ec992e8763e1f8d41ddc10864e7d8cd4cb99bacea6224c4abe634bbabec?s=96&d=mm&r=g\",\"caption\":\"NeuroEdge\"},\"url\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/author\\\/neuroedge\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"AI Learns to Connect Sight and Sound Like Humans Do - NeuroEdge","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/","og_locale":"en_US","og_type":"article","og_title":"AI Learns to Connect Sight and Sound Like Humans Do","og_description":"Artificial intelligence systems are getting better at mimicking how humans naturally connect what they see with what they hear. MIT researchers have developed a new machine-learning approach that helps AI models automatically match corresponding audio and visual information from video clips\u2014without needing human labels to guide the process. The breakthrough could eventually help robots better ... Read more","og_url":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/","og_site_name":"NeuroEdge","article_published_time":"2025-05-22T12:48:22+00:00","og_image":[{"width":900,"height":600,"url":"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/MIT-AV-Learning-01-press_0.jpg","type":"image\/jpeg"}],"author":"NeuroEdge","twitter_card":"summary_large_image","twitter_misc":{"Written by":"NeuroEdge","Est. reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/#article","isPartOf":{"@id":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/"},"author":{"name":"NeuroEdge","@id":"https:\/\/scienceblog.com\/neuroedge\/#\/schema\/person\/a13c664778e7eb97cb71e3e1ad356d2e"},"headline":"AI Learns to Connect Sight and Sound Like Humans Do","datePublished":"2025-05-22T12:48:22+00:00","mainEntityOfPage":{"@id":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/"},"wordCount":725,"commentCount":0,"publisher":{"@id":"https:\/\/scienceblog.com\/neuroedge\/#organization"},"image":{"@id":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/#primaryimage"},"thumbnailUrl":"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/MIT-AV-Learning-01-press_0.jpg","articleSection":["Automation &amp; Efficiency","Technology"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/#respond"]}],"copyrightYear":"2025","copyrightHolder":{"@id":"https:\/\/scienceblog.com\/#organization"}},{"@type":"WebPage","@id":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/","url":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/","name":"AI Learns to Connect Sight and Sound Like Humans Do - NeuroEdge","isPartOf":{"@id":"https:\/\/scienceblog.com\/neuroedge\/#website"},"primaryImageOfPage":{"@id":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/#primaryimage"},"image":{"@id":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/#primaryimage"},"thumbnailUrl":"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/MIT-AV-Learning-01-press_0.jpg","datePublished":"2025-05-22T12:48:22+00:00","breadcrumb":{"@id":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/#primaryimage","url":"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/MIT-AV-Learning-01-press_0.jpg","contentUrl":"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/MIT-AV-Learning-01-press_0.jpg","width":900,"height":600,"caption":"Illustration of Asian woman cellist"},{"@type":"BreadcrumbList","@id":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/22\/ai-learns-to-connect-sight-and-sound-like-humans-do\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scienceblog.com\/neuroedge\/"},{"@type":"ListItem","position":2,"name":"AI Learns to Connect Sight and Sound Like Humans Do"}]},{"@type":"WebSite","@id":"https:\/\/scienceblog.com\/neuroedge\/#website","url":"https:\/\/scienceblog.com\/neuroedge\/","name":"NeuroEdge","description":"A data-driven look at neuroscience and AI, for investors, policymakers, and innovators.","publisher":{"@id":"https:\/\/scienceblog.com\/neuroedge\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scienceblog.com\/neuroedge\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scienceblog.com\/neuroedge\/#organization","name":"NeuroEdge","url":"https:\/\/scienceblog.com\/neuroedge\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scienceblog.com\/neuroedge\/#\/schema\/logo\/image\/","url":"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/04\/cropped-neuroedge_logo.jpg","contentUrl":"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/04\/cropped-neuroedge_logo.jpg","width":955,"height":191,"caption":"NeuroEdge"},"image":{"@id":"https:\/\/scienceblog.com\/neuroedge\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/scienceblog.com\/neuroedge\/#\/schema\/person\/a13c664778e7eb97cb71e3e1ad356d2e","name":"NeuroEdge","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/28782ec992e8763e1f8d41ddc10864e7d8cd4cb99bacea6224c4abe634bbabec?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/28782ec992e8763e1f8d41ddc10864e7d8cd4cb99bacea6224c4abe634bbabec?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/28782ec992e8763e1f8d41ddc10864e7d8cd4cb99bacea6224c4abe634bbabec?s=96&d=mm&r=g","caption":"NeuroEdge"},"url":"https:\/\/scienceblog.com\/neuroedge\/author\/neuroedge\/"}]}},"jetpack_featured_media_url":"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/MIT-AV-Learning-01-press_0.jpg","jetpack_likes_enabled":true,"jetpack_sharing_enabled":true,"jetpack-related-posts":[{"id":207,"url":"https:\/\/scienceblog.com\/neuroedge\/2025\/06\/20\/human-ai-teams-make-better-medical-diagnoses\/","url_meta":{"origin":175,"position":0},"title":"Human-AI Teams Make Better Medical Diagnoses","author":"NeuroEdge","date":"June 20, 2025","format":false,"excerpt":"Hybrid collectives consisting of humans and artificial intelligence make significantly more accurate medical diagnoses than either medical professionals or AI systems alone. New research analyzing over 40,000 diagnoses reveals that combining human expertise with AI models creates a powerful diagnostic partnership that outperforms traditional approaches. The study, published in Proceedings\u2026","rel":"","context":"In &quot;Automation &amp; Efficiency&quot;","block_context":{"text":"Automation &amp; Efficiency","link":"https:\/\/scienceblog.com\/neuroedge\/category\/automation-efficiency\/"},"img":{"alt_text":"Robot and doctor shaking hands","src":"https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/06\/Low-Res_MPIB-DoctorsKI.webp?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/06\/Low-Res_MPIB-DoctorsKI.webp?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/06\/Low-Res_MPIB-DoctorsKI.webp?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/06\/Low-Res_MPIB-DoctorsKI.webp?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":148,"url":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/13\/flashes-of-hope-mits-light-and-sound-therapy-reverses-memory-loss-in-down-syndrome-mice\/","url_meta":{"origin":175,"position":1},"title":"Flashes of Hope: MIT\u2019s Light and Sound Therapy Reverses Memory Loss in Down Syndrome Mice","author":"NeuroEdge","date":"May 13, 2025","format":false,"excerpt":"MIT researchers have discovered that exposing mice with Down syndrome to specific light and sound patterns can significantly improve memory, enhance brain connectivity, and boost the formation of new neurons. This promising approach, which uses 40Hz sensory stimulation known as GENUS (gamma entrainment using sensory stimulation), could potentially open new\u2026","rel":"","context":"In &quot;Brain Health&quot;","block_context":{"text":"Brain Health","link":"https:\/\/scienceblog.com\/neuroedge\/category\/brain-health\/"},"img":{"alt_text":"Images from an MIT study show increased neurogenesis in mice exposed to 40Hz stimulation, as indicated by elevated levels of the markers Ki67 and EdU. Yellow arrows highlight cells expressing these markers, compared to mice exposed only to ambient light and sound.","src":"https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/Neurogenesis-with-GENUS.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/Neurogenesis-with-GENUS.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/Neurogenesis-with-GENUS.jpg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/Neurogenesis-with-GENUS.jpg?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":73,"url":"https:\/\/scienceblog.com\/neuroedge\/2025\/04\/24\/ai-fails-to-read-human-social-cues\/","url_meta":{"origin":175,"position":2},"title":"AI Fails To Read Human Social Cues","author":"NeuroEdge","date":"April 24, 2025","format":false,"excerpt":"Despite rapid advances in artificial intelligence, humans still maintain a significant edge when it comes to understanding social interactions, according to new research from Johns Hopkins University that reveals fundamental limitations in AI's ability to interpret human behavior. The study, presented at the International Conference on Learning Representations, found that\u2026","rel":"","context":"In &quot;Computational Innovation&quot;","block_context":{"text":"Computational Innovation","link":"https:\/\/scienceblog.com\/neuroedge\/category\/computational-innovation\/"},"img":{"alt_text":"A man covering his eyes in embarassment","src":"https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/04\/man-379800_1280.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/04\/man-379800_1280.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/04\/man-379800_1280.jpg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/04\/man-379800_1280.jpg?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":145,"url":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/13\/ai-has-crossed-a-philosophical-threshold-new-study-argues-modern-systems-possess-free-will\/","url_meta":{"origin":175,"position":3},"title":"AI Has Crossed a Philosophical Threshold: New Study Argues Modern Systems Possess Free Will","author":"NeuroEdge","date":"May 13, 2025","format":false,"excerpt":"As artificial intelligence systems grow more sophisticated, they're challenging fundamental philosophical ideas about what makes humans unique. A new study published in the journal AI and Ethics argues that today's advanced AI systems utilizing large language models (LLMs) have crossed a significant threshold\u2014they now meet all three philosophical conditions for\u2026","rel":"","context":"In &quot;Ethics&quot;","block_context":{"text":"Ethics","link":"https:\/\/scienceblog.com\/neuroedge\/category\/ethics\/"},"img":{"alt_text":"Frank Martela from Aalto University","src":"https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/Frank-Martela-from-Aalto-University.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/Frank-Martela-from-Aalto-University.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/Frank-Martela-from-Aalto-University.jpg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/Frank-Martela-from-Aalto-University.jpg?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":192,"url":"https:\/\/scienceblog.com\/neuroedge\/2025\/05\/29\/why-ai-needs-leashes-not-guardrails-say-experts\/","url_meta":{"origin":175,"position":4},"title":"Why AI Needs Leashes, Not Guardrails, Say Experts","author":"NeuroEdge","date":"May 29, 2025","format":false,"excerpt":"The widespread push for AI \"guardrails\" fundamentally misunderstands how to regulate artificial intelligence safely and effectively, according to new research from University of Pennsylvania and University of Notre Dame scholars. Instead of fixed barriers that constrain innovation, policymakers should impose flexible \"leashes\" that allow AI to explore new domains while\u2026","rel":"","context":"In &quot;Automation &amp; Efficiency&quot;","block_context":{"text":"Automation &amp; Efficiency","link":"https:\/\/scienceblog.com\/neuroedge\/category\/automation-efficiency\/"},"img":{"alt_text":"ai dog on leash","src":"https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/ai-dog-on-leash.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/ai-dog-on-leash.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/ai-dog-on-leash.jpg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/05\/ai-dog-on-leash.jpg?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":284,"url":"https:\/\/scienceblog.com\/neuroedge\/2025\/12\/29\/brain-model-discovers-neurons-that-reliably-predict-mistakes\/","url_meta":{"origin":175,"position":5},"title":"Brain Model Discovers Neurons That Reliably Predict Mistakes","author":"NeuroEdge","date":"December 29, 2025","format":false,"excerpt":"About 20 percent of neurons in a learning brain seem to be doing something counterintuitive. When these cells become more active, mistakes follow. A new computational model of the brain, built to mirror real neural circuits rather than optimize performance, stumbled onto this pattern while learning a simple visual task.\u2026","rel":"","context":"In &quot;Brain Health&quot;","block_context":{"text":"Brain Health","link":"https:\/\/scienceblog.com\/neuroedge\/category\/brain-health\/"},"img":{"alt_text":"neuron networks","src":"https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/12\/Screenshot-2025-12-29-at-8.52.01-AM.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/12\/Screenshot-2025-12-29-at-8.52.01-AM.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/12\/Screenshot-2025-12-29-at-8.52.01-AM.jpg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/12\/Screenshot-2025-12-29-at-8.52.01-AM.jpg?resize=700%2C400&ssl=1 2x"},"classes":[]}],"_links":{"self":[{"href":"https:\/\/scienceblog.com\/neuroedge\/wp-json\/wp\/v2\/posts\/175","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scienceblog.com\/neuroedge\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scienceblog.com\/neuroedge\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scienceblog.com\/neuroedge\/wp-json\/wp\/v2\/users\/1297"}],"replies":[{"embeddable":true,"href":"https:\/\/scienceblog.com\/neuroedge\/wp-json\/wp\/v2\/comments?post=175"}],"version-history":[{"count":1,"href":"https:\/\/scienceblog.com\/neuroedge\/wp-json\/wp\/v2\/posts\/175\/revisions"}],"predecessor-version":[{"id":177,"href":"https:\/\/scienceblog.com\/neuroedge\/wp-json\/wp\/v2\/posts\/175\/revisions\/177"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scienceblog.com\/neuroedge\/wp-json\/wp\/v2\/media\/176"}],"wp:attachment":[{"href":"https:\/\/scienceblog.com\/neuroedge\/wp-json\/wp\/v2\/media?parent=175"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scienceblog.com\/neuroedge\/wp-json\/wp\/v2\/categories?post=175"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scienceblog.com\/neuroedge\/wp-json\/wp\/v2\/tags?post=175"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}