{"id":315,"date":"2026-03-09T06:58:45","date_gmt":"2026-03-09T13:58:45","guid":{"rendered":"https:\/\/scienceblog.com\/neuroedge\/?p=315"},"modified":"2026-03-09T06:58:45","modified_gmt":"2026-03-09T13:58:45","slug":"ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones","status":"publish","type":"post","link":"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/","title":{"rendered":"AI Disclosure Labels Reduce Trust in True Science Posts While Boosting False Ones"},"content":{"rendered":"<p>Slapping a label on AI-generated content is the regulatory world&#8217;s current favourite answer to the misinformation problem. Transparent, scalable, required by law in China and under the EU AI Act, endorsed by Meta and X. The logic seems obvious enough: tell people a machine wrote something and they&#8217;ll scrutinise it harder. They didn&#8217;t, as it turns out. Or rather, they did scrutinise it harder, just not quite in the way anyone hoped.<\/p>\n<p>A study out this week in the Journal of Science Communication tested exactly this, and the findings are uncomfortable for anyone who&#8217;s spent time arguing that transparency labels are an adequate safeguard. Teng Lin, a PhD candidate at the University of Chinese Academy of Social Sciences in Beijing, working with Master&#8217;s student Yiqing Zhang, ran a controlled experiment with 433 participants recruited through an online platform between March and May last year. The setup was deliberately straightforward: participants read and rated eight social media posts on food safety and disease topics, each formatted as a Weibo post, some labelled as AI-generated and some not. Half the posts were accurate. Half weren&#8217;t.<\/p>\n<p>The posts themselves came from a rather elegant source: China&#8217;s official Science Rumour Debunking Platform, which publishes expert-verified lists of debunked health claims. 
Lin and Zhang used GPT-4 to rewrite these into both accurate and misleading Sina Weibo-style posts (the misleading versions mirrored the original rumours; the accurate ones preserved the debunking). Each participant saw all eight posts in randomised order, rating credibility on a five-point scale after each one. The AI disclosure label, when present, appeared in red at the top: &#8220;Attention: The content was detected as being generated by AI.&#8221;<\/p>\n<p>Simple enough. And the results were, in a word, backwards.<\/p>\n<p>&#8220;Our most important finding is what we call a &#8216;truth-falsity crossover effect,'&#8221; says Lin. &#8220;The same AI label pushes credibility in opposite directions depending on whether the information is true or false: it reduces the credibility of true messages and increases the credibility of false ones.&#8221; The interaction was large and statistically robust, surviving checks for individual post characteristics and participants&#8217; prior knowledge. It wasn&#8217;t noise.<\/p>\n<p>Why? Two cognitive mechanisms seem to be doing the damage, and they pull in the same direction even though they operate differently. The first is something researchers call the machine heuristic: a generalised tendency to perceive AI-generated content as objective, neutral, data-driven. When you see a label announcing AI authorship, you may, more or less automatically, reach for a mental shortcut that equates machine production with factual reliability. This works fine (or at least, it doesn&#8217;t actively backfire) when the content actually is factual and reliable. The trouble is that misinformation written in a confident, data-citing pseudo-scientific style looks, through that particular lens, exactly like what a trustworthy machine would produce. It fits the template. 
Correct scientific information, which tends to involve qualification and interpretive nuance rather than confident factual assertion, often doesn&#8217;t fit it as neatly.<\/p>\n<p>The second mechanism runs through something called Stereotype Content Theory, which holds that we tend to evaluate things (people, institutions, technologies) along two axes: warmth and competence. AI consistently scores high on perceived competence and low on warmth: efficient, powerful, rather cold. In the context of science communication, that profile may actively disadvantage correct information. Good scientific explanation isn&#8217;t just technically accurate; it involves contextualisation, hedging, acknowledgement of uncertainty. Those are precisely the qualities that the &#8220;cold, competent machine&#8221; stereotype discounts. &#8220;We focused on science-related information shared on social media,&#8221; Lin notes, and it is worth sitting with that for a moment. Science communication is probably more exposed here than almost any other domain: readers can&#8217;t independently verify what they&#8217;re reading, so they rely on source cues, and the source cue in this case is actively misdirecting them.<\/p>\n<p>Individual attitudes added a wrinkle. Participants who held strongly negative views of AI penalised correct information even more when it wore the label. Among those same AI-sceptics, the credibility boost for misinformation was reduced, but not eliminated; it was topic-dependent, weakening for one of the two subject areas but persisting in the other. This isn&#8217;t what &#8220;algorithm aversion&#8221; research would lead you to expect. The theory suggests that people who dislike AI will distrust AI-generated content across the board. What Lin and Zhang found instead is more asymmetric: strong negative attitudes make things worse for correct information while only partially helping with false claims. 
Being suspicious of AI, in other words, is not actually protective in the way you might hope.<\/p>\n<p>How involved participants were with the topic barely mattered. That&#8217;s perhaps the finding that should worry communication researchers most. The heuristic processing that&#8217;s doing the damage isn&#8217;t a low-attention phenomenon, something that kicks in only when people aren&#8217;t really paying attention. Even engaged readers were affected.<\/p>\n<p>Lin is careful not to overstate what follows. &#8220;In our paper we put forward some recommendations, although they need further research to be validated,&#8221; he says. The study used text only, stripped out social endorsement cues like likes and reposts, and was conducted in a specifically Chinese platform context where public attitudes to AI may differ from Western samples. Whether this crossover effect generalises to video, audio, or images is genuinely unknown.<\/p>\n<p>What the findings do suggest is that a single disclosure label isn&#8217;t doing enough cognitive work on its own. &#8220;One proposal is to implement a dual-labeling approach,&#8221; Lin explains. &#8220;Instead of simply stating that the content is AI-generated, the label could also include a disclaimer indicating that the information has not been independently verified, or add a risk warning.&#8221; The idea is to add friction rather than merely a flag. A separate suggestion takes a tiered approach: &#8220;Different types of scientific information carry different levels of risk. For example, medical or health-related information may require a stronger warning, while information about new technologies may involve lower risk. So we suggest using different levels of disclosure depending on the type and risk level of the content.&#8221;<\/p>\n<p>Regulators across the world are, right now, building transparency requirements into law on the assumption that labelling AI content is protective. 
This study is a fairly direct challenge to that assumption; not proof it&#8217;s wrong, but evidence that the relationship between disclosure and credibility is more tangled, more counterintuitive, than the policy currently reflects. A label that redistributes credibility toward what&#8217;s false is arguably worse than no label at all. That&#8217;s a conclusion the regulatory conversation hasn&#8217;t quite caught up to yet.<\/p>\n<p>DOI \/ Source: <a href=\"https:\/\/doi.org\/10.22323\/358020260107085703\">https:\/\/doi.org\/10.22323\/358020260107085703<\/a><\/p>\n<hr \/>\n<h2>Frequently Asked Questions<\/h2>\n<p><strong>Why would an AI label make misinformation seem more trustworthy, not less?<\/strong> The label seems to activate a mental shortcut researchers call the &#8220;machine heuristic,&#8221; a tendency to equate AI production with factual objectivity. Misinformation written in a confident, data-citing style fits that template neatly. Correct scientific information, which tends to involve qualification and interpretive nuance, often doesn&#8217;t, so it gets discounted by the same shortcut rather than benefiting from it.<\/p>\n<p><strong>Does this mean AI transparency labels are making the misinformation problem worse?<\/strong> The study can&#8217;t say that definitively; it was conducted with Chinese Weibo users on two topic areas, without the likes and reposts that appear on real platforms. But the results suggest the single-label model being built into law by the EU AI Act and Chinese regulations may not be adequate on its own, and could be actively counterproductive in some conditions. Whether that scales to a net harm in the real world depends on a lot of variables this study doesn&#8217;t cover.<\/p>\n<p><strong>Is the effect worse if you already distrust AI?<\/strong> Somewhat, but not in the way you&#8217;d expect. People with strongly negative attitudes toward AI penalised correct information more harshly when it carried the label. 
Their scepticism was asymmetric: it fell harder on verified information than on plausible-sounding falsehoods. Being suspicious of AI, it turns out, doesn&#8217;t protect you against the crossover effect; it may make the wrong half of it considerably worse.<\/p>\n<p><strong>What would actually work better than the current approach?<\/strong> The researchers propose two directions: a dual-label that pairs the AI disclosure with a caveat that the information hasn&#8217;t been independently verified, and a tiered system that calibrates warning strength to content risk, with stronger flags for health information and lighter ones for lower-stakes topics. Neither has been tested empirically yet, but both aim to add reasoning scaffolding that the current single label fails to provide.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Slapping a label on AI-generated content is the regulatory world&#8217;s current favourite answer to the misinformation problem. Transparent, scalable, required by law in China and under the EU AI Act, endorsed by Meta and X. The logic seems obvious enough: tell people a machine wrote something and they&#8217;ll scrutinise it harder. 
They didn&#8217;t, as it &#8230; <a title=\"AI Disclosure Labels Reduce Trust in True Science Posts While Boosting False Ones\" class=\"read-more\" href=\"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/\" aria-label=\"Read more about AI Disclosure Labels Reduce Trust in True Science Posts While Boosting False Ones\">Read more<\/a><\/p>\n","protected":false},"author":2,"featured_media":316,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[2,10,9],"tags":[],"class_list":["post-315","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-automation-efficiency","category-ethics","category-society","generate-columns","tablet-grid-50","mobile-grid-100","grid-parent","grid-50"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.3 (Yoast SEO v27.3) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>AI Disclosure Labels Reduce Trust in True Science Posts While Boosting False Ones - NeuroEdge<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI Disclosure Labels Reduce Trust in True Science Posts While Boosting False Ones\" \/>\n<meta property=\"og:description\" content=\"Slapping a label 
on AI-generated content is the regulatory world&#8217;s current favourite answer to the misinformation problem. Transparent, scalable, required by law in China and under the EU AI Act, endorsed by Meta and X. The logic seems obvious enough: tell people a machine wrote something and they&#8217;ll scrutinise it harder. They didn&#8217;t, as it ... Read more\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/\" \/>\n<meta property=\"og:site_name\" content=\"NeuroEdge\" \/>\n<meta property=\"article:author\" content=\"http:\/\/www.facebook.com\/scienceblogfan\" \/>\n<meta property=\"article:published_time\" content=\"2026-03-09T13:58:45+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2026\/03\/ai-warning-sign.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"700\" \/>\n\t<meta property=\"og:image:height\" content=\"394\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"ScienceBlog.com\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@http:\/\/twitter.com\/#!\/scienceblogtwit\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"ScienceBlog.com\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2026\\\/03\\\/09\\\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2026\\\/03\\\/09\\\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\\\/\"},\"author\":{\"name\":\"ScienceBlog.com\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#\\\/schema\\\/person\\\/ed241920497bad0017dd8a166c449c7a\"},\"headline\":\"AI Disclosure Labels Reduce Trust in True Science Posts While Boosting False Ones\",\"datePublished\":\"2026-03-09T13:58:45+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2026\\\/03\\\/09\\\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\\\/\"},\"wordCount\":1435,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2026\\\/03\\\/09\\\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/wp-content\\\/uploads\\\/sites\\\/14\\\/2026\\\/03\\\/ai-warning-sign.webp\",\"articleSection\":[\"Automation &amp; 
Efficiency\",\"Ethics\",\"Society\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2026\\\/03\\\/09\\\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\\\/#respond\"]}],\"copyrightYear\":\"2026\",\"copyrightHolder\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/#organization\"}},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2026\\\/03\\\/09\\\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\\\/\",\"url\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2026\\\/03\\\/09\\\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\\\/\",\"name\":\"AI Disclosure Labels Reduce Trust in True Science Posts While Boosting False Ones - NeuroEdge\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2026\\\/03\\\/09\\\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2026\\\/03\\\/09\\\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/wp-content\\\/uploads\\\/sites\\\/14\\\/2026\\\/03\\\/ai-warning-sign.webp\",\"datePublished\":\"2026-03-09T13:58:45+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2026\\\/03\\\/09\\\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2026\\\/03\\\/09\\\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\\\/\"]}]},{\"@type\":\"ImageObje
ct\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2026\\\/03\\\/09\\\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\\\/#primaryimage\",\"url\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/wp-content\\\/uploads\\\/sites\\\/14\\\/2026\\\/03\\\/ai-warning-sign.webp\",\"contentUrl\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/wp-content\\\/uploads\\\/sites\\\/14\\\/2026\\\/03\\\/ai-warning-sign.webp\",\"width\":700,\"height\":394,\"caption\":\"big yellow ai warning sign\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/2026\\\/03\\\/09\\\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI Disclosure Labels Reduce Trust in True Science Posts While Boosting False Ones\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#website\",\"url\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/\",\"name\":\"NeuroEdge\",\"description\":\"A data-driven look at neuroscience and AI, for investors, policymakers, and 
innovators.\",\"publisher\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#organization\",\"name\":\"NeuroEdge\",\"url\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/wp-content\\\/uploads\\\/sites\\\/14\\\/2025\\\/04\\\/cropped-neuroedge_logo.jpg\",\"contentUrl\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/wp-content\\\/uploads\\\/sites\\\/14\\\/2025\\\/04\\\/cropped-neuroedge_logo.jpg\",\"width\":955,\"height\":191,\"caption\":\"NeuroEdge\"},\"image\":{\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#\\\/schema\\\/logo\\\/image\\\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/#\\\/schema\\\/person\\\/ed241920497bad0017dd8a166c449c7a\",\"name\":\"ScienceBlog.com\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/62b833078ac09a3922d7e2aa4fb18ef427ce5212b5dd3a5d12a73a921b7167e3?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/62b833078ac09a3922d7e2aa4fb18ef427ce5212b5dd3a5d12a73a921b7167e3?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/62b833078ac09a3922d7e2aa4fb18ef427ce5212b5dd3a5d12a73a921b7167e3?s=96&d=mm&r=g\",\"caption\":\"ScienceBlog.com\"},\"sameAs\":[\"https:\\\/\\\/scienceblog.com\",\"http:\\\/\\\/www.facebook.com\\\/scienceblogfan\",\"https:\\\/\\
\/x.com\\\/http:\\\/\\\/twitter.com\\\/#!\\\/scienceblogtwit\"],\"url\":\"https:\\\/\\\/scienceblog.com\\\/neuroedge\\\/author\\\/admin\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"AI Disclosure Labels Reduce Trust in True Science Posts While Boosting False Ones - NeuroEdge","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/","og_locale":"en_US","og_type":"article","og_title":"AI Disclosure Labels Reduce Trust in True Science Posts While Boosting False Ones","og_description":"Slapping a label on AI-generated content is the regulatory world&#8217;s current favourite answer to the misinformation problem. Transparent, scalable, required by law in China and under the EU AI Act, endorsed by Meta and X. The logic seems obvious enough: tell people a machine wrote something and they&#8217;ll scrutinise it harder. They didn&#8217;t, as it ... Read more","og_url":"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/","og_site_name":"NeuroEdge","article_author":"http:\/\/www.facebook.com\/scienceblogfan","article_published_time":"2026-03-09T13:58:45+00:00","og_image":[{"width":700,"height":394,"url":"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2026\/03\/ai-warning-sign.webp","type":"image\/webp"}],"author":"ScienceBlog.com","twitter_card":"summary_large_image","twitter_creator":"@http:\/\/twitter.com\/#!\/scienceblogtwit","twitter_misc":{"Written by":"ScienceBlog.com","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/#article","isPartOf":{"@id":"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/"},"author":{"name":"ScienceBlog.com","@id":"https:\/\/scienceblog.com\/neuroedge\/#\/schema\/person\/ed241920497bad0017dd8a166c449c7a"},"headline":"AI Disclosure Labels Reduce Trust in True Science Posts While Boosting False Ones","datePublished":"2026-03-09T13:58:45+00:00","mainEntityOfPage":{"@id":"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/"},"wordCount":1435,"commentCount":0,"publisher":{"@id":"https:\/\/scienceblog.com\/neuroedge\/#organization"},"image":{"@id":"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/#primaryimage"},"thumbnailUrl":"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2026\/03\/ai-warning-sign.webp","articleSection":["Automation &amp; Efficiency","Ethics","Society"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/#respond"]}],"copyrightYear":"2026","copyrightHolder":{"@id":"https:\/\/scienceblog.com\/#organization"}},{"@type":"WebPage","@id":"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/","url":"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/","name":"AI Disclosure Labels Reduce Trust in True 
Science Posts While Boosting False Ones - NeuroEdge","isPartOf":{"@id":"https:\/\/scienceblog.com\/neuroedge\/#website"},"primaryImageOfPage":{"@id":"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/#primaryimage"},"image":{"@id":"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/#primaryimage"},"thumbnailUrl":"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2026\/03\/ai-warning-sign.webp","datePublished":"2026-03-09T13:58:45+00:00","breadcrumb":{"@id":"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/#primaryimage","url":"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2026\/03\/ai-warning-sign.webp","contentUrl":"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2026\/03\/ai-warning-sign.webp","width":700,"height":394,"caption":"big yellow ai warning sign"},{"@type":"BreadcrumbList","@id":"https:\/\/scienceblog.com\/neuroedge\/2026\/03\/09\/ai-disclosure-labels-reduce-trust-in-true-science-posts-while-boosting-false-ones\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scienceblog.com\/neuroedge\/"},{"@type":"ListItem","position":2,"name":"AI Disclosure Labels Reduce Trust in True Science Posts While Boosting False 
Ones"}]},{"@type":"WebSite","@id":"https:\/\/scienceblog.com\/neuroedge\/#website","url":"https:\/\/scienceblog.com\/neuroedge\/","name":"NeuroEdge","description":"A data-driven look at neuroscience and AI, for investors, policymakers, and innovators.","publisher":{"@id":"https:\/\/scienceblog.com\/neuroedge\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scienceblog.com\/neuroedge\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scienceblog.com\/neuroedge\/#organization","name":"NeuroEdge","url":"https:\/\/scienceblog.com\/neuroedge\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scienceblog.com\/neuroedge\/#\/schema\/logo\/image\/","url":"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/04\/cropped-neuroedge_logo.jpg","contentUrl":"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2025\/04\/cropped-neuroedge_logo.jpg","width":955,"height":191,"caption":"NeuroEdge"},"image":{"@id":"https:\/\/scienceblog.com\/neuroedge\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/scienceblog.com\/neuroedge\/#\/schema\/person\/ed241920497bad0017dd8a166c449c7a","name":"ScienceBlog.com","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/62b833078ac09a3922d7e2aa4fb18ef427ce5212b5dd3a5d12a73a921b7167e3?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/62b833078ac09a3922d7e2aa4fb18ef427ce5212b5dd3a5d12a73a921b7167e3?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/62b833078ac09a3922d7e2aa4fb18ef427ce5212b5dd3a5d12a73a921b7167e3?s=96&d=mm&r=g","caption":"ScienceBlog.com"},"sameAs":["https:\/\/scienceblog.com","http:\/\/www.facebook.com\/scienceblogfan","https:\/\/x.com\/http:\/\/twitter.com\/#!\/scienceblogtwit"]
,"url":"https:\/\/scienceblog.com\/neuroedge\/author\/admin\/"}]}},"jetpack_featured_media_url":"https:\/\/scienceblog.com\/neuroedge\/wp-content\/uploads\/sites\/14\/2026\/03\/ai-warning-sign.webp","jetpack_likes_enabled":true,"jetpack_sharing_enabled":true,"jetpack-related-posts":[{"id":225,"url":"https:\/\/scienceblog.com\/neuroedge\/2025\/07\/22\/ai-makes-lies-look-like-the-truth-and-harder-to-spot\/","url_meta":{"origin":315,"position":0},"title":"AI Makes Lies Look Like the Truth\u2014and Harder to Spot","author":"NeuroEdge","date":"July 22, 2025","format":false,"excerpt":"In a new study published in PNAS Nexus, researchers show that AI-generated paraphrases of disinformation\u2014dubbed \"AIPasta\"\u2014can make false claims appear more credible and widely shared. Unlike traditional copy-and-paste propaganda, AIPasta boosts perceptions of social consensus while flying under the radar of existing AI-detection tools. When repetition meets AI, falsehoods gain\u2026","rel":"","context":"In &quot;Society&quot;","block_context":{"text":"Society","link":"https:\/\/scienceblog.com\/neuroedge\/category\/society\/"},"img":{"alt_text":"#StopTheSteal AIPasta Stimuli: Profile images, usernames, and handles constructed by Jalbert et al. 2025. 