
[🇧🇩] Artificial Intelligence: Its Challenges and Prospects in Bangladesh


Govt drafts AI policy to tap its potential, tackle concerns





The government has formulated a draft National AI Policy as it looks to make the best use of artificial intelligence to raise productivity and spur economic growth while addressing the concerns raised by a technology that is spreading at breakneck pace.

"This policy seeks to harness the benefits of AI while mitigating its risks, fostering innovation, and ensuring that AI technologies serve the best interests of the citizens and the nation as a whole," the draft said.

The Information and Communication Technology Division prepared the National AI Policy and published it recently.

The policy is expected to address the legal, ethical, and societal implications of AI effectively and efficiently.

It has placed a significant emphasis on public awareness and education, enlightening citizens about AI and its far-reaching benefits.

The objectives of the policy are to accelerate equitable economic growth and productivity through AI-driven optimisation, forecasting, and data-driven decision-making, and to ensure efficient, accessible public services through AI-enabled personalisation.

The draft comes as countries around the world race to prepare to deal with the changes being brought about by the fast-evolving technology.

The International Monetary Fund (IMF) has published its new AI Preparedness Index Dashboard for 174 economies, based on their readiness in four areas: digital infrastructure, human capital and labour market policies, innovation and economic integration, and regulation.

It showed that Bangladesh's score stands at 0.38, compared with India's 0.49, Pakistan's 0.37, Nepal's 0.35, Sri Lanka's 0.44, the US's 0.77, China's 0.64, and Australia's 0.73. Developed countries typically score at least 0.7.

In Bangladesh, the government plans to adopt data-driven policy-making in every sector through AI-supported analytics and insights and nurture a skilled workforce that can utilise and build AI technologies.

It wants to embed AI in education and skills development so that the largely young population can meet the demands of the future.

The draft said the country will also foster a culture of AI research and innovation through public and private funding. It will ensure the development of, and adherence to, a robust ethical framework by establishing regulatory measures that uphold human rights in AI development and deployment.

The ICT Division, in collaboration with relevant ministries, industry, academia, and civil society, will take necessary steps to establish the institutional framework for the policy implementation, officials said.

It will set up an independent National Artificial Intelligence Center of Excellence (NAICE).

The NAICE will be responsible for coordinating and monitoring AI initiatives using key performance indicators, and for evaluating their social, economic, and environmental impacts, guiding adjustments for maximum benefit and risk mitigation.

It will facilitate collaboration and knowledge-sharing among stakeholders, including government agencies, industry, academia, and civil society. It will ensure that any measures taken to regulate the technology are proportional to the risk and balanced to encourage innovation.

The government will form a high-level national AI advisory council to guide the implementation of sectoral AI initiatives.

The draft said legal and regulatory frameworks are necessary for the policy's implementation.

The National Strategy for AI will be framed, and it will be updated every two years in accordance with the advancement of AI worldwide.

The strategy will include data retention policies, address the legal issues of data governance and ownership, and focus on interoperability and data exchange.

According to IMF economist Giovanni Melina, AI can increase productivity, boost economic growth, and lift incomes. However, it could also wipe out millions of jobs and widen inequality.

IMF's research has shown how AI is poised to reshape the global economy. It could endanger 33 percent of jobs in advanced economies, 24 percent in emerging economies, and 18 percent in low-income countries.

But, on the brighter side, it also brings enormous potential to enhance the productivity of existing jobs, for which AI can be a complementary tool, and to create new jobs and even new industries.

Melina said most emerging market economies and low-income countries have smaller shares of high-skilled jobs than advanced economies, and so will likely be less affected and face fewer immediate disruptions from AI.

"At the same time, many of these countries lack the infrastructure or skilled workforces needed to harness AI's benefits, which could worsen inequality among nations."

The economist said the policy priority for emerging markets and developing economies should be to lay a strong foundation by investing in digital infrastructure and digital training for workers.​
 

How AI can help in disasters like the Milestone crash

VISUAL: MONOROM POLOK

I was born and raised in Dhaka, where I spent the first 25 years of my life before moving to the US in 2015. Today, I work as a research assistant professor at the University of Oklahoma and serve in a research group whose mission is to develop safer, more efficient aircraft through AI, digital twin technologies, and predictive maintenance. We work on reducing production costs, improving airworthiness, and minimising material waste.

When I heard about the FT-7 BGI jet crash into Milestone School and College in Dhaka's Uttara on July 21, 2025, which killed at least 32 people—including the pilot, teachers, and many young students—and injured over 150, I wasn't shocked. I was devastated, but not surprised.

Dhaka is one of the most overcrowded cities in the world, with a population density of about 23,234 people per square kilometre. Despite being the capital, it lacks proper zoning plans for sensitive infrastructure such as military air bases. The Kurmitola air base, from which the aircraft took off, is surrounded by densely packed residential areas and schools. When a malfunction occurred, there was simply nowhere safe for the pilot to go.

Even in the US, with its advanced infrastructure, similar disasters have occurred. Earlier this year, on January 29, a mid-air collision over the Potomac River near Washington, DC, between an American Airlines regional jet and a US Army Black Hawk helicopter tragically claimed all 67 lives aboard both aircraft. But such events are typically followed by systemic changes—strengthening flight corridors, improving air traffic control, and implementing AI for predictive monitoring.

In FY2024-25, the proposed defense budget in Bangladesh was Tk 42,010 crore. Over the years, it has purchased various aircraft; for instance, the FT-7 BGI jet involved in today's crash is an upgraded variant of the Soviet-era MiG-21, with Bangladesh acquiring 16 such aircraft from China between 2011 and 2013. But buying hardware alone doesn't ensure safety. Advanced systems require equally advanced maintenance, simulation, infrastructure, and disaster preparedness—all of which Bangladesh often struggles to adequately provide.

Even superpowers like the US and China are shifting toward AI and digital twin solutions to reduce costs. If they're investing in smarter systems to minimise spending and maximise safety, it's clear that Bangladesh must do the same—not as an option, but as a necessity.

Too often we focus solely on avoiding disaster. But we must also ask: what happens if a crash does occur?

Take the example of India's Air India Flight 171 crash on June 12, 2025. This Boeing 787 Dreamliner went down just 32 seconds after takeoff from Ahmedabad, en route to London, killing 241 of the 242 people on board, along with others on the ground when it impacted a medical hostel complex. While investigations are ongoing, initial findings point to mid-air engine failure. If onboard systems were designed to delay fire spread by even 2-3 minutes after such an event, many lives might have been saved. Today, technologies exist—such as advanced flame-retardant cabin materials and automatic fire suppression systems in critical areas—but they're not always standard or accessible in countries like Bangladesh.

In Dhaka's crash, victims burned to death inside the school. There was no immediate disaster management response. Metro Rail was used to transport victims four hours after the incident—with only one coach allocated. Ambulances were stuck in traffic. Nearby rickshaws and private cars often refused to help. In that critical golden hour, most of the children who died could potentially have been saved.

Imagine a future where an AI system immediately classifies an incident's severity—from Level 1 (minor) to Level 3 (severe)—based on casualty estimates, proximity to medical facilities, traffic congestion, and emergency response availability. Based on this, the system could: i) notify hospitals, fire stations, and police within seconds; ii) activate metro rail or ferry systems to serve as emergency transport; iii) block traffic routes in real time, just as roads are cleared for VIPs today; and iv) command nearby private vehicles to assist in transport—with government compensation issued later via digital tracking.
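
To make the envisioned triage concrete, here is a minimal Python sketch of such a severity classifier and its dispatch list. Everything in it (the thresholds, field names, and actions) is a hypothetical illustration of the idea, not a description of any existing system.

```python
# Hypothetical sketch of the severity triage described above; all
# thresholds, field names, and actions are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Incident:
    est_casualties: int      # early casualty estimate
    km_to_hospital: float    # distance to the nearest major hospital
    congestion: float        # 0.0 (clear roads) .. 1.0 (gridlock)
    responders_nearby: int   # ambulances/fire units within reach

def classify(inc: Incident) -> int:
    """Map an incident to severity Level 1 (minor) .. 3 (severe)."""
    score = 0
    score += 2 if inc.est_casualties > 50 else (1 if inc.est_casualties > 5 else 0)
    score += 1 if inc.km_to_hospital > 5 else 0
    score += 1 if inc.congestion > 0.7 else 0
    score -= 1 if inc.responders_nearby > 10 else 0
    return max(1, min(3, 1 + score // 2))

def dispatch(level: int) -> list[str]:
    """Escalating versions of actions i)-iv) from the text."""
    actions = ["notify hospitals, fire stations and police"]
    if level >= 2:
        actions.append("clear priority traffic corridors in real time")
    if level >= 3:
        actions.append("activate metro rail/ferries as emergency transport")
        actions.append("enlist nearby private vehicles, compensated later")
    return actions

crash = Incident(est_casualties=150, km_to_hospital=6.0,
                 congestion=0.9, responders_nearby=3)
print(f"Level {classify(crash)}:", "; ".join(dispatch(crash and classify(crash))))
```

In practice, the inputs would come from live feeds such as emergency calls, traffic APIs, and hospital registries rather than hand-entered values.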

Even rickshaws and CNGs could be part of a national emergency fleet with built-in GPS coordination. Government investment in such AI-based disaster protocols could dramatically reduce fatalities—not just in plane crashes, but in fires, industrial accidents, and floods.

Globally, military flight tests occur in remote, spacious zones. In the US, bases like Edwards Air Force Base cover thousands of acres for flight testing. China frequently conducts air drills in its vast airspace, including coastal zones like Shandong. Even India tests aircraft over remote desert or mountainous zones.

Bangladesh, however, uses the skies above Dhaka—one of the most densely populated cities in the world—for its flight training. This must change. Future bases should be relocated to less populated regions, with enforced flight corridors and emergency landing zones. It's not just about modern aircraft; it's about responsible geography.

Bangladesh is not resource-rich, but it is youth-rich, with 33 percent of its population aged between 18 and 35, many of whom are enrolled in STEM programmes. Rather than continuing to import costly and difficult-to-maintain foreign equipment, the country should prioritise investing in AI education and workforce development. Building national digital twin platforms for aviation, manufacturing, and infrastructure can lay the foundation for smart, adaptive systems. Local industries should be encouraged to produce essential components and intelligent technologies, which would simultaneously create jobs in fields such as predictive analytics, aerospace simulation, and disaster modelling.

To support this ecosystem, a national disaster coordination network powered by AI is essential. Key policy recommendations include adopting digital twin technology for military aircraft maintenance, mandating fire-resistant interiors and delay-suppression systems in aircraft and public buildings, and establishing AI-based disaster response systems with real-time communication and routing capabilities. Moreover, redesigning flight zones to avoid dense urban centres, incentivising the use of private and public vehicles during emergencies, and using AI to model urban vulnerabilities and guide dynamic zoning policies would collectively strengthen national resilience.

The crash in Dhaka was tragic. But if it becomes just another event on a long list of avoidable disasters, then we are complicit in the next one. As a Bangladeshi-born researcher working to make aircraft safer, I believe the answer lies not in outrage—but in AI-driven transformation.

Md Manjurul Ahsan is research assistant professor at the University of Oklahoma.​
 

AI or human: Who wrote the article and does it affect credibility?

Shishir Moral, Dhaka
Published: 02 Aug 2025, 17:45

Photo: Deutsche Welle

Whether it is a press release or a corporate statement, how credible it appears to readers depends on who they believe wrote it. If they are told a human wrote it, readers tend to find it much more trustworthy. But if they are told it was written by artificial intelligence (AI), their trust decreases significantly.

This is the finding of a study conducted by the University of Kansas in the US. The research report was recently published in the international journal ‘Corporate Communications: An International Journal’.

The study was conducted by Associate Professor Cameron Piercy, PhD researcher Aiman Alhammad, and Assistant Professor Christopher Etheridge of the University of Kansas’ Department of Communication Studies.

The aim of the research was to answer an important question: does knowing whether a piece of writing was produced by a human or by AI change how readers perceive it?

AI is increasingly becoming a part of daily life, and people are constantly exploring new ways to use it. These uses can have both positive and negative impacts. Often, the use of AI is not disclosed.

The idea for this research came from a communication class at the University of Kansas, where students explored whether they could tell the difference between writing produced by AI and by humans.

Co-researcher Cameron Piercy explained, “Even if readers can’t distinguish between human and AI writing, we wanted to know whether their reaction changes if they’re told who the author is.”

Research methodology

The study used a multi-method approach. Participants were presented with a fictional corporate crisis scenario: some customers became ill after consuming products from a company named ‘Chunky Chocolate Company’. The cause was deliberate wrongdoing by some employees.

Participants were then given three types of press releases to read: informative, empathetic, and apologetic. In each case, they were told whether the message was written by a human or by AI.

Human writing more trusted

The results showed that participants who believed a human wrote the press release found the message far more credible and effective. But when told the message was generated by AI, readers tended to respond more neutrally or with suspicion.

However, the researchers found little difference in readers’ reactions based on the type of message—whether it was informative, apologetic, or empathetic.

Accountability lies with humans

The researchers noted that when the identity of the writer is not disclosed in a corporate statement, whether during a crisis or at other times, readers instinctively question who is responsible.

Co-researcher Christopher Etheridge said, “If you're using AI in writing, you must maintain transparency, take responsibility for any errors, and be prepared for the readers’ reactions.”

The researchers concluded that even if AI is used during a crisis, it cannot replace human accountability, editorial judgement or transparency.​
 

Paradox of AI’s humanity
by Obaidul Hamid 02 August, 2025, 00:00



A FEW days ago, I tried to access DeepSeek on my mobile phone. I was taken to a verification page, which wanted to make sure I was a human, not a robot or a mindless automaton. This prompted some thoughts about AI, humanity, and their relationships. I had to prove that I was human to access the large language model originating from China. Does that mean that AI (or a robot) is not allowed to access AI? What would be wrong if an AI chatbot or another technological entity accessed an AI platform such as DeepSeek?

I am not sure if the question of AI’s (non)human credentials is a resolved matter. Is it possible to admit AI into the Homo sapiens club? Or should we treat it just as a machine? Can it fall somewhere in between? I am not thinking of humanoids, which are probably not relevant here.

I thought I had better ask AI itself how it would like to be treated—as a machine, a human, or a hybrid, which we might call ‘machian’.

I opened Microsoft Copilot on my device and wrote this prompt: ‘Can I ask a silly question?’ Its reply was human-like with a smiley at the end: ‘There’s no such thing as a silly question here — ask away! I’m all ears.’ You see, it’s very conversational, friendly, and ever willing to listen and help! Such a reply would be given by an educated and enlightened human speaking very good English.

Algorithmically encouraged, I next asked Copilot, ‘Should we treat you as human or machine or something/someone else?’ This is what it said in response:

‘That’s a great question—and not silly at all!’

‘You should treat me as a machine, specifically an AI assistant designed to help with information, tasks, and conversation. I’m not human, and I don’t have feelings, consciousness, or experiences. But I’m built to understand and respond in a way that feels natural, friendly, and helpful—like chatting with a knowledgeable companion.’

It continued: ‘Some people like to think of me as a tool, like a super-smart calculator or search engine with personality. Others treat me more like a collaborator or digital coworker. It really depends on what works best for you!’

The answer is detailed and helpful, but it’s also equivocal. AI is a machine, but it has ‘personality,’ which I thought only humans could have. It’s a tool, but, at the same time, it can be a collaborator or coworker. As such, will it demand fairness and respectful treatment, as demanded by human colleagues? Can we have social relationships with AI and greet it when we meet, for example?

I read somewhere that we shouldn’t say ‘Thank you’ to a large language model (LLM) because this is a waste of words and time. We don’t need to do this to a machine. However, how can we not say ‘Thank you’ when someone is so helpful — human or machine?

I repeated my inquiry with ChatGPT, the most popular LLM. And its first response also came with a smiley: ‘Of course! Silly questions are often the most fun — ask away.’

When I used the next prompt (Should we treat you as human or machine or something/someone else?), this is what I had: ‘That’s a great question — not silly at all, actually.’

So, ChatGPT followed the same conversational norms as Copilot. Its substantive response was this:

‘You should treat me as a machine — more specifically, an AI developed by OpenAI. I’m not human, I don’t have feelings, consciousness, or personal experiences. But I am designed to understand and respond to human language in a helpful, respectful, and conversational way.’

ChatGPT claims no humanity, but it can behave like a human. This is how it’s created, it says. But there is still some confusion. It further clarified:

‘If thinking of me like a really smart tool or assistant helps, that’s perfect. You can also think of me as a kind of ‘thinking interface’ — I’m here to help you think things through, create, solve problems, or just chat.’

I can understand the ‘tool’ and ‘assistant’ metaphors, but its capacity to think and chat kept me hooked. ChatGPT’s conclusion is, ‘So: not a human, not quite a traditional machine — but definitely not someone with a mind of their own.’

This seems to be less equivocal. The machine identity is emphasised; it made no claims of personality traits such as human feelings, consciousness, or experiences. However, it’s also not a machine in a traditional sense.

My final inquiry participant was DeepSeek, which had the following response to my first prompt, also given with an emoji: ‘Of course! Silly questions are often the most fun—and sometimes the most insightful. Ask away, and I’ll do my best to answer with enthusiasm (and maybe even a little humour).’

When I asked the main question, it said, ‘Great question — and not silly at all! You can think of me as a very advanced tool with a conversational interface.’

DeepSeek projected a hybrid identity — a tool that can also converse. It also elaborated the answer using two subheadings:

‘Machine: At my core, I’m an AI — a language model trained on vast amounts of text. I don’t have consciousness, feelings, or human experiences. I process input and generate responses algorithmically.’

This is a clear denial of human properties. However, it is also ‘human-like’ and can talk, joke, show sympathy, and give the feeling of being natural. This can be noted from the next subheading:

‘Human-like: That said, I’m designed to simulate human conversation, so I can chat, joke, and empathise in a way that feels natural. But it’s all pattern recognition, not a genuine understanding.’

DeepSeek also concluded that we could treat it the way we feel comfortable.

Regardless of what these popular LLMs say, our treatment of AI seems to be unclear. As an academic, I can provide examples from academic publishing.

Some journals have already given co-authorship to AI in papers that use AI to a degree that probably meets the authorship criteria. One thing is, of course, clear: AI can only be a co-author; it can’t be the first or corresponding author.

Giving authorship to AI can be an instance of academic fairness. This is important, as we know of cases of authorship fraud or unfair distribution of authorship, particularly involving supervisors and research students. Sometimes institutional power determines authorship, which is unacceptable. For example, during my last trip to a South Asian country, a colleague shared a story about a university executive demanding that a researcher from another institution write a journal paper. The researcher ghost-wrote the paper, and thus the executive became the sole author of the work without writing a single word. That was not the end of the surprise. The executive also received the honorarium for this publication, in which they had zero input. The actual author received neither money nor recognition. Hopefully, AI authorship questions will prompt us to be fair to human authors.

However, some other journals have refused to accept AI authorship on valid grounds. All authors are held responsible for their contribution to any publication. If anything goes wrong with a paper, can AI be held accountable for it? The answer is no. We cannot penalise, chastise, or punish AI for any misconduct or incorrect information.

The human question of AI is also implicated when we talk about ethics, which is probably an exclusive terrain for humans. Ethics may not apply to non- or post-human entities. Nevertheless, ethics has been attributed to AI in various ways. We often hear how AI can ensure fairness, how it can address inequality and disadvantage, and how it can create a level playing field in education and other domains. AI bias in language, knowledge, and data, which mirrors human prejudice, has also drawn our attention.

I recall a relevant point from a recent conference held in an Asian city. One of the invited speakers shared her research involving students working with AI in groups. The presenter highlighted the question of students being unfair to AI. She reported that a three-member group in which AI was one member had an equal distribution of workload. However, when giving credit for input, the student members agreed to give only about 10 per cent to their AI collaborator. This was an instance of being unfair to AI.

Our conceptualisation of AI probably needs more — and perhaps radical — thinking. Calling it a machine denies its human-like character. Calling it human, on the other hand, may be ridiculous because it doesn’t have human flesh, blood, consciousness, or feelings. At the same time, however, we can’t deny that in infospace it can think and work better and faster than humans. Human intelligence created AI, which can now outsmart humans. But it still needs a human to be able to function like — or better than — a human.

As AI becomes more prevalent in our lives, we need to work out its nature, identity, and human connection. Its role in human endeavours needs to be described carefully, practically, and ethically.

Obaidul Hamid is an associate professor at the University of Queensland in Australia. He researches language, education, and society in the developing world. He is a co-editor of Current Issues in Language Planning.​
 

OpenAI unveils GPT-5 with major upgrades in reasoning, writing and safety

GPT-5. Image: OpenAI

OpenAI has announced the release of GPT-5, its most advanced artificial intelligence model to date, promising substantial gains in reasoning, accuracy and real-world utility. The system becomes the new default model for ChatGPT users, replacing earlier versions including GPT-4o and OpenAI o3.

The company describes GPT-5 as a "unified system" comprising a smart base model, a more powerful reasoning mode dubbed "GPT-5 thinking," and a real-time router that determines which component to use based on the task's complexity or user intent. This enables the system to shift between fast replies and deeper analysis automatically, or when users prompt it to "think hard" about a query.
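
OpenAI has not published the router's internals, so the Python sketch below is purely illustrative: it conveys the general idea of routing a prompt to a fast base model or a slower reasoning mode based on apparent complexity and explicit user intent. The model labels and heuristics are assumptions made for the sketch.

```python
# Illustrative only: OpenAI has not disclosed the router's internals.
# Model labels and heuristics below are assumptions for the sketch.
def route(prompt: str) -> str:
    text = prompt.lower()
    wants_depth = "think hard" in text               # explicit user intent
    looks_complex = len(text.split()) > 150 or any(
        kw in text for kw in ("prove", "debug", "step by step"))
    return "gpt-5-thinking" if wants_depth or looks_complex else "gpt-5-base"

print(route("What is the capital of Bangladesh?"))    # -> gpt-5-base
print(route("Think hard: derive the closed form."))   # -> gpt-5-thinking
```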

Broader capabilities and improved performance

According to OpenAI, GPT-5 outperforms previous iterations in core areas such as writing, coding, health queries, and multimodal reasoning. It is also claimed to deliver faster responses with fewer factual errors and significantly reduced hallucination rates—particularly when using its higher reasoning mode.

In benchmark testing, GPT-5 demonstrated top-tier results:

94.6% on the AIME 2025 competition-level maths test

88% accuracy on Aider Polyglot for multi-language code editing

46.2% on the challenging HealthBench Hard evaluation

When it comes to coding, GPT-5 reportedly excels at building responsive apps and websites from single prompts. OpenAI shared examples where the model created full games and interactive experiences, demonstrating design sensibility in layout and typography as well as functionality.

In creative tasks, GPT-5 is said to offer more nuanced writing support, including improvements in poetry, structural editing, and literary expression. When prompted to write an emotional poem, the model's version was noted for its vivid imagery and stronger emotional resolution compared to previous outputs.

A more helpful and reliable assistant

The model introduces several enhancements aimed at making interactions safer and more useful. These include:

Reduced hallucinations: GPT-5's factual error rate is said to be 45% lower than GPT-4o, and up to 80% lower than OpenAI o3 when using the reasoning mode.

Less sycophancy: OpenAI reports that GPT-5 is less likely to flatter or agree uncritically, addressing a longstanding issue with language models.

Honest limitations: In tests involving impossible or underspecified tasks, GPT-5 more reliably recognises its own limits rather than providing misleading or overconfident responses.

In health contexts, GPT-5 is described as a better "thought partner", adapting to the user's level of knowledge and local context to provide safer and more personalised information. OpenAI emphasised, however, that GPT-5 does not replace medical professionals.

Pro version and customisation

The company also introduced GPT-5 Pro, a premium variant with extended reasoning capabilities designed for the most demanding tasks. Early testing shows it made 22% fewer major errors and was preferred over GPT-5 standard in nearly 68% of expert evaluations.

In addition, OpenAI is piloting four new personality presets—Cynic, Robot, Listener, and Nerd—designed to give users more control over the AI's tone and interaction style.

Rollout and access

GPT-5 begins rolling out today to ChatGPT Plus, Pro, and Team users, with Enterprise and education accounts gaining access next week. Free-tier users will also receive limited access, though usage limits apply. Once these are reached, responses will be handled by a lighter GPT-5 mini model.

The system was trained on Microsoft Azure supercomputers and incorporates multiple layers of safety protocols, particularly in sensitive domains like biology. OpenAI says it has taken a precautionary approach to capabilities in these areas, integrating monitoring systems and extensive red-teaming.​
 

AI in next-gen policing
by Md Motiar Rahman 13 August, 2025, 00:00



CRIME detection and law enforcement have undergone significant evolution with the advancement of technology. In many developed countries, artificial intelligence has revolutionised policing by improving crime prevention, investigation and response mechanisms. AI-based tools such as predictive analytics, facial recognition, automated surveillance and natural language processing have enhanced the efficiency of law enforcement agencies.

For Bangladesh police, where crime rates fluctuate due to social, economic and political factors, the integration of AI could offer faster, more accurate and data-driven crime detection. It is imperative to explore how AI can be utilised in crime detection in Bangladesh, the potential benefits, the challenges and a future road map for AI-driven law enforcement.

Predictive policing

AI-POWERED predictive policing and crime mapping tools can significantly enhance the effectiveness of the Bangladesh police by analysing vast amounts of historical crime data to forecast where and when crimes are likely to occur. By generating heat maps and identifying emerging crime trends, AI can support strategic deployment of police resources in high-risk urban areas such as Dhaka, Chattogram, Gazipur and Narayanganj. Machine learning algorithms, for instance, can detect patterns in burglary, theft and drug trafficking based on past incidents, enabling law enforcement to act preemptively. Although Bangladesh has not yet formally adopted advanced systems like PredPol — used by Los Angeles to identify crime-prone areas — pilot initiatives, such as the Dhaka Metropolitan Police’s Crime Data Management System, provide a promising foundation for integrating AI-driven approaches into policing.
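
As a toy illustration of the heat-map idea, the Python sketch below bins hypothetical geocoded incident records into roughly kilometre-sized grid cells and flags cells with above-average counts as patrol priorities. Production systems such as PredPol rely on far richer spatio-temporal models; the data here is invented.

```python
# Toy hot-spot scoring over hypothetical geocoded incident records.
from collections import Counter

incidents = [  # (latitude, longitude, hour of day) -- invented data
    (23.81, 90.41, 22), (23.81, 90.41, 23), (23.80, 90.42, 1),
    (23.75, 90.39, 14), (23.81, 90.41, 21), (23.81, 90.41, 22),
]

def cell(lat: float, lon: float, size: float = 0.01) -> tuple[int, int]:
    """Snap a coordinate to a grid cell roughly one kilometre across."""
    return (round(lat / size), round(lon / size))

counts = Counter(cell(lat, lon) for lat, lon, _ in incidents)

# Cells with above-average incident counts become patrol priorities.
threshold = sum(counts.values()) / len(counts)
hot_spots = [c for c, n in counts.items() if n > threshold]
print("hot cells:", hot_spots)
```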

Facial recognition, biometric identification

AI-POWERED facial recognition and biometric identification technologies offer significant potential to enhance policing in Bangladesh by enabling the identification of suspects through CCTV footage, national ID databases and even social media images. These tools can assist in tracking fugitives and missing persons, preventing identity fraud and strengthening airport and border security — particularly in sensitive areas like Teknaf, Benapole and Tamabil, where human trafficking and illegal migration are persistent threats. By matching images captured from surveillance systems or mobile devices with official databases, law enforcement agencies can respond more swiftly and accurately. However, the implementation of such technologies must be approached with caution, as the risk of misuse, wrongful arrests and inadequate oversight could erode public trust and raise serious concerns about privacy and accountability.
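
The matching step can be pictured with the hedged sketch below: a hypothetical face-embedding model converts each image to a vector, and a candidate match is flagged when cosine similarity to a watchlist entry crosses a threshold. The vectors and threshold are toy values, and any real deployment would require human verification of every hit.

```python
# Hedged sketch of watchlist matching with toy embedding vectors.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

watchlist = {"suspect-017": [0.12, 0.80, 0.55],   # invented references
             "missing-042": [0.90, 0.10, 0.30]}
probe = [0.14, 0.78, 0.57]                        # embedding of a CCTV frame

THRESHOLD = 0.98  # tuned to trade false matches against misses
for name, ref in watchlist.items():
    if cosine(probe, ref) >= THRESHOLD:
        print("possible match:", name)  # must be verified by a human officer
```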

Recently, the Police Bureau of Investigation (PBI) revealed that the fire at the BIAM Foundation building in Dhaka was a deliberate act of arson intended to destroy sensitive documents of the BCS (Administration) Welfare Multipurpose Cooperative Society. Initially believed to have been caused by a faulty air conditioner, the case took a dramatic turn when the PBI, using AI-driven video analysis, identified Ashraful Islam disabling the CCTV cameras before the fire and later arrested him in Kurigram. His confession led to the arrest of Md Zahidul Islam, an administrative officer at BIAM, who allegedly masterminded the plot with Ashraful, office assistant Abdul Malek, and driver Md Faruk; the group had been promised Tk 10–12 lakh to destroy the documents. Tragically, an explosion during the act killed Malek instantly and fatally injured Faruk. The case marks a significant breakthrough for Bangladesh's law enforcement, demonstrating the transformative role of AI in uncovering complex crimes; the investigation has now shifted to establishing the nature of the destroyed documents and identifying any additional conspirators.

Automated surveillance, anomaly detection

AUTOMATED surveillance and AI-powered anomaly detection systems are increasingly being integrated into traffic cameras, ATM surveillance networks, and broader security infrastructure to identify suspicious activities without human intervention. These advanced video analytics tools can detect abnormal behaviour in public spaces, such as unauthorised gatherings, potential riots or terrorist threats. Drones equipped with AI-powered cameras are also being deployed for crowd monitoring and emergency response, providing real-time situational awareness. At major transit points like Kamalapur Railway Station, Hazrat Shahjalal International Airport, and critical traffic intersections, AI-driven surveillance can recognise anomalies such as sudden crowd surges, unusual movements or unattended objects, enabling quicker and more targeted security responses. During the 2026 national election, drones could also prove valuable for monitoring unlawful gatherings and group violence.
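
One of the simplest anomaly cues mentioned above, a sudden crowd surge, can be illustrated with the toy Python sketch below, which flags a frame when the detected people count jumps several standard deviations above its recent rolling baseline. Deployed systems use learned video models rather than a hand-set rule like this, and the counts are invented.

```python
# Toy crowd-surge flag: alert when a per-frame people count exceeds the
# rolling baseline by several standard deviations. Counts are invented.
from statistics import mean, pstdev

counts = [40, 42, 39, 41, 43, 40, 95]   # people detected per video frame
WINDOW, SIGMA = 5, 3.0

for i in range(WINDOW, len(counts)):
    history = counts[i - WINDOW:i]
    mu, sd = mean(history), pstdev(history) or 1.0
    if counts[i] > mu + SIGMA * sd:
        print(f"frame {i}: possible surge ({counts[i]} vs baseline {mu:.0f})")
```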

Cybercrime investigation

WITH the rise of digital threats, AI is becoming an essential tool in cybercrime investigation by enabling law enforcement to detect patterns of fraud, hacking and online harassment more efficiently. AI systems can analyse activities on social media and the dark web to uncover cyber-criminal networks, identify and prevent financial fraud, digital identity theft, and phishing scams, and assist forensic teams in decrypting compromised systems or examining digital evidence. However, as AI empowers police efforts, it simultaneously offers new tools for criminals. In Bangladesh, emerging AI-enabled crimes pose significant challenges: deepfake technology can produce fabricated videos or audio impersonating politicians, police officers or business leaders, potentially causing unrest, especially during sensitive periods like elections. Similarly, AI-driven phishing and social engineering — particularly in Bangla — are being used to manipulate individuals into disclosing confidential information. Moreover, AI is increasingly being used in orchestrating automated financial fraud, including investment scams and illicit activities in mobile banking and cryptocurrency, reflecting a growing concern for Bangladesh’s digital security landscape.
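
As a hypothetical illustration of one such triage signal, the sketch below scores a message against a handful of common phishing markers. An operational unit would rely on trained classifiers, including Bangla-language models, rather than a keyword list; the markers and message here are invented.

```python
# Hypothetical keyword-based phishing score; real systems use trained
# classifiers. Markers and message are invented for the illustration.
import re

MARKERS = [r"verify your account", r"urgent", r"bkash.*pin",
           r"click (here|the link)"]

def phishing_score(text: str) -> float:
    """Fraction of known markers present in the message."""
    lowered = text.lower()
    hits = sum(bool(re.search(p, lowered)) for p in MARKERS)
    return hits / len(MARKERS)

msg = "URGENT: verify your account now, click here to restore access"
print(f"score: {phishing_score(msg):.2f}")  # high scores go to an analyst
```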

In recent years, social media platforms such as Facebook, YouTube, TikTok and other digital outlets have increasingly become arenas where individuals — often under the guise of free expression or anonymous profiles — launch malicious attacks against dignitaries, political figures, public servants and government officials. These attacks frequently take the form of videos, livestreams, and posts that employ vulgar, obscene and highly offensive language. The content often crosses the boundary of criticism into personal vilification, character assassination and misinformation. More disturbingly, there is a growing trend of digital blackmail, where private information, photos or fabricated content is used as leverage to extort money, favours, or silence from victims. Such actions not only harm the individuals targeted but also undermine the dignity of public institutions and the rule of law.

The lack of digital literacy, weak enforcement of cyber laws and the absence of a clear framework for accountability on online platforms have further contributed to this toxic environment. If left unchecked, this trend poses a serious threat to public trust, social harmony and the integrity of governance. The growing menace of online abuse, defamation and blackmail targeting dignitaries and public officials in Bangladesh can be addressed through a balanced, multi-dimensional strategy, one that prioritises properly resourcing cybercrime investigation units so that offenders can be identified and prosecuted without infringing on freedom of expression.

Recently, a fake video statement was circulated on social media platforms using a still photograph of the inspector general of police of Bangladesh combined with an artificially generated voice. In this video, advanced AI technology was used to clone the IGP’s voice and deliver a fabricated and misleading message, giving the false impression that it was an official communication. This deliberate misuse of AI constitutes a serious cyber offence under the existing laws of Bangladesh, including those related to digital fraud, identity misuse and incitement. Such malicious activities not only aim to mislead the public but also create unnecessary sensation, confusion and distrust. They can incite public resentment and potentially threaten national security and public order.

Potential benefits of AI

INTEGRATING artificial intelligence into crime detection and law enforcement holds significant promise for the Bangladesh police, offering a transformative approach to improving investigative accuracy, operational efficiency, public safety and citizen engagement.

One of the most immediate advantages of AI is its ability to speed up investigations while enhancing accuracy. Traditionally time-consuming processes such as analysing forensic evidence, reviewing CCTV footage or identifying suspects can now be completed rapidly through AI-powered tools. These systems can sift through massive datasets, detect patterns and establish links between incidents, thereby accelerating case resolution and improving the quality of investigations.

AI also enhances the overall efficiency of law enforcement operations. By analysing real-time data, AI can optimise patrol routes, traffic flows and emergency response coordination. This data-driven approach ensures that resources are deployed where they are most needed, reducing waste and lowering operational costs while maximising impact on the ground.

In the realm of counterterrorism, AI proves indispensable by enhancing intelligence gathering and threat analysis. It can monitor social media for signs of radicalisation, track suspicious financial transactions, and analyse call records to uncover extremist networks. This strengthens the capacity of the Bangladesh police to detect and disrupt terrorist plots before they materialise.

Lastly, AI can play a constructive role in fostering better police-community relations. Tools such as AI-powered chatbots and intelligent complaint management systems offer citizens a more efficient and transparent way to communicate with law enforcement. By handling non-emergency queries, providing status updates and reducing wait times at police stations, these systems can build public trust and promote greater accountability within the police force.

To be continued...
 
