[🇧🇩] Artificial Intelligence: Its Challenges and Prospects in Bangladesh


Stop taking health advice from AI-generated social media videos

Ahnaf Tahmeed Purna


Photo: Orchid Chakma

A cartoon garlic clove aggressively scrubbing toxins off blood vessels. A bright yellow turmeric root polishing the liver. Spinach being dragged across the colon like a sponge, wiping everything clean.

You’ve seen these short videos. Everyone has. These AI-generated videos are designed to be oddly satisfying. Clean visuals, simple ideas, and a reassuring sense that health is just a matter of “cleansing” the body the right way. One food, one organ, one problem solved.

But what makes them effective is exactly what makes them misleading. They take real human anatomy and turn it into a cartoon system of dirt and detergents. They replace medical complexity with visual certainty, where turmeric is not a spice with mild anti-inflammatory properties, but a liver “detox tool”, and garlic is not a food with limited cardiovascular associations, but a direct pipe-cleaner for arteries.

As a medical student, one of the lessons you learn in your early years is that the human body seldom behaves in simple, linear ways. There is no single food that “cleans” an organ, no universal detox pathway that can be activated through dietary hacks, and no shortcut that bypasses physiology. Yet, social media thrives on exactly the opposite idea: health is a puzzle with easy answers waiting to be unlocked. It is emotionally entrancing. It gives viewers a sense of control in a system that often feels overwhelming. This is where AI-generated content fits in. It does not just spread misinformation; it packages it in a format that feels intuitive and passively rewires how people understand their own bodies.

The danger today is not only that false health advice exists online, but that it is increasingly being generated at scale by systems that are not accountable to evidence. AI tools can replicate the tone of educational content while stripping away the safeguards of medical accuracy. Combined with animation tools and content templates, this results in a flood of videos that look like simplified medical education but are often detached from clinical reality.

In clinical practice, one pattern physicians notice is that people interpret symptoms through whatever information is most accessible to them at the time. Increasingly, that information comes from short-form social media content rather than trained professionals. A persistent headache becomes “toxins”. Fatigue becomes “deficiency”. Digestive discomfort becomes a “colon cleanse issue”. By the time real medical attention is sought, the narrative has already been shaped, sometimes in ways that delay proper diagnosis. This does not happen because people are careless. It happens because the content is persuasive, visually clear, and emotionally reassuring. It offers certainty where medicine often cannot.

What makes AI-generated health content particularly difficult to challenge is its tone. It sounds confident, structured, and explanatory. In medicine, however, confidence is not the same as accuracy. Real clinical guidance is always cautious, conditional, and inherently context-dependent. It does not promise universal fixes because it cannot. But caution does not perform well on social media; certainty does. And so that certainty gets amplified, even when it is unsupported.

It would be easy to place all responsibility on viewers, but that would be incomplete. Platforms are designed to maximise engagement, not accuracy. AI-generated content is cheap, fast, and scalable. Medical correction is slow, nuanced, and often ignored by algorithmic systems. This creates a structural imbalance: evidence-based medicine competes with content optimised for attention. The result is not just misinformation spreading; it is misinformation outperforming accurate information.

The solution is not to disconnect from digital platforms entirely. They are now part of how people learn and communicate. But there needs to be a stronger culture of scepticism when it comes to health content, especially content that feels too simple.

A useful question to ask ourselves: does this explanation acknowledge complexity, or does it discard it? Real medicine rarely fits into a single cause-and-effect story. When something claims to do so, it deserves scrutiny.

Equally important is where we choose to get our medical understanding from. Not all information sources carry the same weight. Peer-reviewed literature, trained clinicians, and established medical institutions exist for a reason. They are built on systems of validation, correction, and accountability.

There is an implicit risk in the way health information is evolving online. It is not just that false claims are circulating. It is that the boundary between education and entertainment is blurring. When a cartoon vegetable can convincingly “clean” your colon in 30 seconds, and when that feels more intuitive than actual physiology, we are no longer just dealing with misinformation. We are dealing with a shift in how people understand their own bodies.

Medicine is not a set of visual metaphors nor a collection of easy fixes. It is a discipline where patterns are interpreted through evidence, probability, and clinical judgment rather than certainty or shortcuts. And the more we replace that complexity with algorithm-friendly simplicity, the further we drift from what a genuine understanding of health is supposed to be.

Purna is a fourth-year medical student at Sirajganj Medical College.
 

Doctors can't be replaced: Why we need an “arguable” AI partner

Farjana Yesmin
Updated: 15 May 2026, 13:35


Doctors using a VR simulation with hologram medical technology. Photo: rawpixel.com / roungroat

I remember the long, quiet queues at the village clinics in Satkhira where I grew up. Families waited for hours under the heat, speaking in low voices, hoping for a few minutes with a doctor who was likely exhausted. In those settings, the primary challenge is not a lack of data. It is a lack of time.

Bangladesh currently has roughly 0.7 physicians and fewer than one hospital bed per 1,000 people. These numbers fall far below World Health Organization recommendations. With a national shortage of over 90,000 doctors, any technological solution we introduce must ease this crisis rather than worsen it.

In late April 2026, Google DeepMind announced its "AI co-clinician" research initiative. This vision suggests a "triadic" model of care: an AI agent working alongside both the patient and the doctor. It is designed to extend a clinician’s reach while keeping the human expert in control. While this is a significant technical achievement, we must ask a difficult question: If these systems are built for well-resourced hospitals in the Global North, will they truly help a rural health complex in Bangladesh, or will they create new forms of inequality?

The vision of a reasoning partner

The DeepMind initiative moves away from the old idea of AI as a simple "black box" that gives a single answer. Their system was tested using the "NOHARM" framework, which focuses on avoiding errors of omission and commission. In blind evaluations, physicians often preferred its synthesized evidence over traditional tools.

This reflects a shift from AI as a judge to AI as a collaborator. It is a principle Bangladesh should endorse, but with caution. A generic "co-clinician" designed in London or California may not understand the messy, fragmented reality of our local healthcare system.

Why collaboration is not enough

In my research on "arguable systems," I have argued that a co-clinician should do more than just offer a recommendation for a doctor to accept or reject. True collaboration requires the ability to disagree.

A doctor in a busy upazila health complex might need to argue with the AI. The patient in front of her might come from a community the AI never encountered during its training. They might speak a dialect the model cannot parse or present symptoms that do not fit the clean patterns of Western datasets.

When an algorithm fails to listen, it commits what philosophers call "epistemic injustice." This happens in two ways. First, "testimonial injustice" occurs when the AI fails to trust what a patient knows about their own body because that patient is not from a wealthy or digitally recorded demographic. Second, "hermeneutical injustice" happens when the patient lacks the specific vocabulary the AI expects. A truly collaborative system must be designed to notice these gaps and make its own uncertainty visible to the doctor.
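To make the idea concrete, here is a minimal sketch of what an "arguable" recommendation could look like in code. Every name here is a hypothetical illustration of mine, not DeepMind's interface or any deployed system: the point is simply that out-of-distribution inputs are flagged rather than hidden, and that the doctor's objection always wins.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    diagnosis: str
    confidence: float                 # the model's own probability estimate
    evidence: list[str]               # findings behind the suggestion
    uncertainty_flags: list[str] = field(default_factory=list)

def propose(patient: dict, training_profile: dict) -> Recommendation:
    """Surface, rather than hide, cases the model was not trained for."""
    rec = Recommendation(
        diagnosis="dengue fever",
        confidence=0.72,
        evidence=["fever > 5 days", "falling platelet count"],
    )
    # Structured uncertainty: make gaps in the training data visible
    # to the doctor instead of silently returning a confident answer.
    if patient.get("dialect") not in training_profile["dialects"]:
        rec.uncertainty_flags.append("history taken in a dialect unseen in training")
    if patient.get("district") not in training_profile["districts"]:
        rec.uncertainty_flags.append("patient population absent from training data")
    return rec

def contest(rec: Recommendation, reason: str) -> Recommendation:
    """The physician's objection is recorded, and the human override is final."""
    rec.uncertainty_flags.append(f"contested by physician: {reason}")
    rec.confidence = 0.0
    return rec
```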

The privacy-explainability tension

Bangladesh is currently developing its digital health infrastructure and data protection laws. Our hospitals cannot simply pool patient records into a central server for legal and ethical reasons.

Federated learning offers a solution by allowing hospitals to train shared models without moving raw patient data. My work on the MedHE framework uses encryption to create a "fortress" for patient privacy. However, there is a hidden cost: the "noise" used to protect privacy can sometimes hide rare diseases or the early signals of an outbreak.
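For readers who want the mechanics, the sketch below shows bare federated averaging: each hospital takes one gradient step on its own records, and only the resulting model weights are shared and averaged. This is my own simplified illustration of the general pattern; MedHE's encryption layer and privacy noise are deliberately left out.

```python
import numpy as np

def local_update(w, X, y, lr=0.1):
    """One gradient step on a hospital's own data; raw records never leave."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_round(global_w, hospitals):
    """Average the locally trained weights (FedAvg), never the patient data."""
    return np.mean([local_update(global_w, X, y) for X, y in hospitals], axis=0)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
hospitals = []
for _ in range(4):                       # four hospitals, each with 50 patients
    X = rng.normal(size=(50, 3))
    hospitals.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(3)
for _ in range(30):
    w = federated_round(w, hospitals)
print(w)  # approaches true_w although no hospital ever shared its rows
```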


In Bangladesh, if privacy settings are too aggressive, a dengue early-warning system might fail to detect a cluster in a remote district because that cluster appears as statistical noise. We need "equity-aware" privacy. The goal should not be mathematical purity, but clinical truth for the most vulnerable populations.
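A toy calculation makes the tension visible. For simple case counts, the standard Laplace mechanism adds noise with scale 1/ε, so at a strict privacy budget the noise is larger than a small real cluster. The district numbers below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
district_cases = np.array([120.0, 95.0, 6.0, 0.0, 2.0])  # index 2: the remote cluster

def dp_release(counts, epsilon):
    """Laplace mechanism for counts (sensitivity 1): noise scale is 1/epsilon."""
    return counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)

print(dp_release(district_cases, epsilon=5.0))   # scale 0.2: the 6-case cluster survives
print(dp_release(district_cases, epsilon=0.05))  # scale 20: the cluster drowns in noise
```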

Bias as a local reality

We often treat algorithmic bias as a technical bug, but it is actually a reflection of the training environment. An ECG-based heart disease predictor trained mostly on men may systematically under-diagnose women who show different symptoms.

In my research on fairness-aware representation learning, we train models to ignore sensitive characteristics like gender or income when they lead to discriminatory outcomes. For a co-clinician to work in Bangladesh, it must be evaluated on our own patient populations and our own languages. Fairness cannot be an afterthought.
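One common way to operationalise this idea, sketched below as a generic illustration rather than a description of my published method, is to add a training penalty on any covariance between the model's scores and a sensitive attribute, pushing the optimiser toward predictions that carry the clinical signal but not the demographic one.

```python
import numpy as np

def fair_loss(w, X, y, s, lam=5.0):
    """Squared error plus a penalty on covariance between scores and attribute s."""
    scores = X @ w
    task = np.mean((scores - y) ** 2)
    cov = np.mean((s - s.mean()) * (scores - scores.mean()))
    return task + lam * cov ** 2

def step(w, X, y, s, lr=0.05, eps=1e-5):
    """Numerical gradient keeps the sketch short; a real system would use autodiff."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w)
        d[i] = eps
        g[i] = (fair_loss(w + d, X, y, s) - fair_loss(w - d, X, y, s)) / (2 * eps)
    return w - lr * g

rng = np.random.default_rng(2)
s = rng.integers(0, 2, size=200).astype(float)   # sensitive attribute (e.g. sex)
X = np.c_[rng.normal(size=200), s + rng.normal(scale=0.3, size=200)]
y = X[:, 0] + 0.8 * s                            # historical labels carry the bias
w = np.zeros(2)
for _ in range(300):
    w = step(w, X, y, s)
```

With lam set to zero the learned scores simply reproduce the biased labels; raising it trades a little accuracy for predictions that no longer track the sensitive attribute.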

Bangladesh as a research partner

Our country is not a passive recipient of Western technology. Local researchers are already building solutions that fit our specific context.

In a study I conducted with 14 healthcare professionals in Bangladesh, we found that clinicians strongly preferred “hybrid” explanations, which combine data-driven insights with established medical rules. More than half of the clinicians said they would trust such a system in actual clinical use because they could see the logic behind the suggestion.
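The pattern is easier to see in code. The sketch below is a hypothetical composite of mine, with invented thresholds that are not clinical guidance: the model's score is never surfaced alone, but always alongside the established rules that support or contradict it.

```python
def hybrid_explanation(model_score: float, patient: dict) -> dict:
    """Pair the data-driven risk score with the clinical rules behind it."""
    rules = []
    if patient["systolic_bp"] >= 140:
        rules.append("SBP >= 140 mmHg meets the hypertension criterion")
    if patient["age"] > 60:
        rules.append("age over 60 raises baseline cardiovascular risk")
    # The suggestion only escalates when data and rules agree.
    action = "refer" if model_score > 0.7 and rules else "monitor"
    return {"risk_score": model_score,
            "supporting_rules": rules,
            "suggested_action": action}

print(hybrid_explanation(0.82, {"systolic_bp": 150, "age": 66}))
```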

From multilingual triage apps to AI-assisted referral systems for pregnant women, a local ecosystem is growing. Our policymakers should treat Bangladesh not as a testing ground for foreign models, but as a partner with its own research capacity.

A roadmap for the future

To move forward, I propose five essential steps for integrating AI into our clinics:

1. Demand arguability: Systems must allow doctors to contest reasoning and see structured uncertainty. A doctor should be able to argue with the AI and win.

2. Local fairness benchmarks: We need our own test sets and definitions of what constitutes a harmful error in a rural setting.

3. Equity-aware privacy: We must ensure that privacy protections do not erase the signals of marginalized communities or rare conditions.

4. Invest in collaboration research: We need large-scale trials that measure trust and workflow integration, not just technical accuracy.

5. National AI audit body: A technical and ethical board should monitor algorithms before and after deployment to prevent bias or privacy breaches.

It's about the patient's life

I became a research scientist because I wanted to build technology that finally hears the voices I heard in those Satkhira queues. The DeepMind co-clinician is an impressive tool, but its success in Bangladesh will depend on whether it can adapt to us.

When we build or deploy these models, we should not only ask if they are accurate. We must ask if they listen. We must ensure that when a patient’s life depends on it, the doctor remains the final authority, supported by a partner that knows its own limits.
 
