
[🇧🇩] Artificial Intelligence: Its Challenges and Prospects in Bangladesh



Without AI, Bangladesh risks falling behind: experts
BPO summit begins in Dhaka



If business process outsourcing (BPO) companies in Bangladesh fail to adopt technologies like artificial intelligence, machine learning, and large language models, they will fall behind in global competition, experts warned today.

"Technological advancement in the past two years has surpassed all previous eras of innovation," said Faiz Ahmad Taiyeb, special assistant to the chief adviser with executive authority over the posts, telecommunications and ICT ministry.

"If companies cannot adapt to this transformation, they may shut down within two years… They will be eliminated by default," he said.

"Especially for IT and ITES companies, there is no room to survive without embracing change. This failure will not only harm businesses but also damage the country's competitiveness," he added.

Taiyeb was addressing the inauguration of a two-day BPO Summit Bangladesh 2025 at Senaprangan, Dhaka.

Organised by the Bangladesh Association of Contact Center and Outsourcing (BACCO), the event bore the theme "BPO 2.0: Revolution to Innovation" this year, signalling a shift towards innovation-driven growth in the industry.

Taiyeb urged BPO companies to swiftly assess what peer nations like China, India, Vietnam and the Philippines are doing in AI adoption.

"Only then can you approach the government with informed policy demands," he said.

He emphasised that IT engineers must understand sectoral challenges, as technology now permeates every industry.

"The way Chinese companies are leveraging generative AI and accelerating business process upgrades—if we fail to keep pace, we must identify these gaps and bring them to the government's attention," said Taiyeb.

Bangladesh has set a target to generate $5 billion from IT exports by 2030.

"Sri Lanka, one-tenth our size, has set a similar goal. Yet our current annual export hovers at around $700 million to $800 million. We must double our IT exports every year—this is a shared national challenge," he said.

Taiyeb recommended providing export incentives of 8 percent to 10 percent for frontier technologies like AI, while offering 4 percent to 5 percent for legacy segments.

"This ensures that new tech is prioritised without overburdening the government," he said.

He predicted that legacy call centres would disappear within five years, transforming into AI and large language model (LLM)-powered operations. "This sector must embrace transformation now."

BACCO President Tanvir Ibrahim said, "The BPO Summit is not just an industry event—it is a collective declaration of our confidence, capability and future aspirations. We believe this summit will help empower the youth with technology-driven employment and entrepreneurship opportunities."

Adilur Rahman Khan, adviser to the interim government on industry and housing & public works, attended as chief guest.

"The BPO sector is no longer just about outsourcing—it symbolises human resource development and economic transformation. The government will provide full support for its growth," he said.

ICT Secretary Shish Haider Chowdhury and BACCO Secretary General Faisal Alim also spoke.

This year's summit features nine international seminars and workshops, a job fair, special sessions on entrepreneurship, freelancer platforms, and a large exhibition with domestic and international BPO and tech companies.

Diplomats, tech experts and global buyers are attending.

A major attraction this year is an "Experience Zone", showcasing cutting-edge technologies including Starlink satellite internet, immersive AR and VR simulations themed on the July uprising, advanced drones and submarine technologies, and robotics exhibitions.​
 

Shikho partners with Meta to launch AI literacy course in Bangladesh




Shikho, the edtech platform, has recently announced a partnership with Meta to develop and deliver a new AI literacy course aimed at broadening public understanding of artificial intelligence (AI).

According to Shahir Chowdhury, Founder and CEO of Shikho, it will be a "first-of-its-kind course on AI literacy in Bangladesh."

"The course that we are planning with Meta will be available for free on our Shikho platform around October 2025. The course will be available in Bangla," said Shahir.

The announcement comes shortly before the expected public rollout of Shikho AI, a tool that offers instant, curriculum-aligned academic support to users across the country.​
 

The West’s AI dominance is the new colonialism


VISUAL: FREEPIK

In the colonial era, empires extracted gold, rubber, and labour. Nowadays, data and cheap click-work are the new spoils. The West's AI boom thrives on digital colonialism, where personal data and low-wage labour from the Global South fuel its wealth. Silicon Valley firms harvest user information while outsourcing tasks to Nairobi and Manila, replicating colonial power dynamics.

The AI revolution mirrors past hierarchies, with the Global North monopolising benefits and the Global South serving as a raw material provider. Western dominance in AI is a new form of imperialism. The question is whether, and how, the Global South can fight back before this digital subjugation deepens further.

Although many regard it as "the great equaliser," AI perpetuates old inequalities. Big Tech and Western nations are extracting value from the Global South in familiar ways. Tech giants steal tweets, photos, and voices from the Global South to feed their AI. People in places like Nairobi and New Delhi contribute vast amounts of data without compensation, while the data flows outward, much like the colonial trade routes that once drained resources from the periphery to the West.

In the Global South, millions of workers perform invisible labour for AI companies, earning as little as $1.50 per hour for tasks like moderating disturbing content or annotating data. In Kenya, India, and Latin America, AI's growth reinforces colonial-like dynamics, with low-paid workers bearing the brunt of exploitation. In Kenya, workers filter toxic content for OpenAI, while locals help train AI models for self-driving cars in Uganda and Nigeria.

Indian tech workers face similar conditions, often working as gig labourers without job security. Skilled researchers are syphoned off to Silicon Valley, mirroring colonial extraction.

The Global South's dependence on Western AI platforms like OpenAI's GPT or Google's cloud services can also be seen as colonial economic dependency. Only five percent of Africa's AI talent has access to the computational power, known as compute, needed to carry out complex tasks. Lacking sufficient computing resources, African AI researchers are forced to rent American servers or rely on free tools, trapped in a cycle of subjugation.

AI colonialism goes beyond economics; it also involves language and cultural erasure. Big Tech's AI systems silence half the world's languages: they function best in English, Chinese, or European languages, while many African and South Asian languages are poorly represented. For example, until community-driven projects stepped in, languages like Hausa and Bangla had little AI support despite having millions of speakers. This neglect amounts to AI colonising cognitive space, where a Western digital framework overshadows local cultures. AI perpetuates a Western worldview in subtle ways, ignoring diverse local realities.

In Latin America, the AI imbalance is visible in the region's imports of surveillance tech from Western firms. Countries like Brazil and Ecuador use facial recognition and biometric systems with little oversight, echoing the colonial past when foreign powers controlled domestic security. These examples expose a consistent pattern: the Global South provides labour, data, and markets, while Western firms retain control, profits, and power. AI's benefits flow upwards, while the burdens remain on those at the bottom, which is the age-old system of exploitation.

Major US companies like Google, Amazon, and Microsoft control most AI patents, cloud infrastructure, and models, creating dependency for the Global South. Western nations, especially the US, use their power to restrict access to critical AI resources through export controls on semiconductors and by blocking services like OpenAI in sanctioned countries. This is similar to colonial-era tactics, where industrial technologies were withheld from colonised nations to maintain control.

Ironically, the West often presents itself as the champion of AI ethics and human rights, but its enforcement of these principles is selective and self-serving. Tech companies promote ethical AI guidelines at home, only to look the other way when their products are used abusively abroad. For instance, Western firms with ethical codes supply surveillance AI to regimes with poor human rights records, profiting from opaque deals in the Global South. These companies have a responsibility to respect human rights, yet their tools frequently enable the oppression of journalists and dissidents. The West also dominates AI governance forums, sidelining Global South voices in shaping the rules of AI. This results in hypocritical rules written by the powerful, often imposed on developing nations with little consultation.

Despite this hypocrisy, resistance is emerging. Grassroots projects like Mozilla's Common Voice are crowdsourcing speech data in underrepresented languages like Hausa and Tamil, allowing local developers to create tech that reflects their culture using their mother tongue. Karya, an ethical data company in India, is redefining AI labour by paying workers fair wages, granting them ownership of their data, and ensuring they benefit from its resale.

Countries are also uniting for AI cooperation, with the BRICS bloc and other emerging economies forming alliances to reduce dependency on Western tech. Although nascent, these alliances point towards a world where no single empire controls AI. Data sovereignty laws are also gaining traction, as nations like Nigeria, Kenya, and Ghana pass legislation to keep personal data within their borders to boost their local tech industries. In Latin America, countries like Brazil and Chile are discussing "data localisation" as part of their digital strategies.

Open-source AI efforts, such as Chile's Latam GPT, tailored for Spanish, and Africa's Masakhane project, aim to strengthen Natural Language Processing (NLP) and create inclusive, locally relevant AI models. These efforts are breaking Big Tech's monopoly and showcasing the Global South's potential to develop technology on its own terms.

The stakes in the AI struggle are already high as a new digital empire is being formed, dominated by a few nations and companies. However, the Global South has the numbers, talent, and moral authority to demand change. It must mainly push for regulations against data exploitation, secure a seat in AI governance, and build local capacity for independent innovation. Meanwhile, the West must confront the unsustainable nature of AI's colonial-style inequity. History shows empires fall when people resist. With collective action, AI can be for all, not just the privileged few.

Shaikh Afnan Birahim is a postgraduate student of Computing Science at the University of Glasgow in Scotland.​
 

Meta launches ‘AI Live Skilling Program’ in Bangladesh

FE ONLINE DESK
Published :
Jun 28, 2025 15:05
Updated :
Jun 28, 2025 15:05




Meta, the parent company of Facebook, in partnership with LightCastle Partners, an international management consulting firm, officially launched its Live Skilling initiative in Bangladesh at a high-level event titled ‘Enabling AI for Small Enterprises: Policy, Innovation, and Inclusion’, held at the Amari hotel in the capital’s Gulshan area on Thursday.

The event convened key government officials, tech leaders, and private sector stakeholders to explore how responsible AI adoption can address national development priorities and strengthen Bangladesh’s digital economy, says a media release.

The event began with opening remarks from Ruici Tio, Regional Lead, Safety & Integrity Policy Programs (APAC) at Meta, who introduced Meta’s suite of AI tools and officially launched the Meta Live Skilling initiative in Bangladesh.

“We’re proud to announce the launch of Meta Live Skilling, with LightCastle Partners, to support businesses in Bangladesh with training and on-demand resources on Meta's AI tools,” said Simon Milner, Vice President of Public Policy, Asia-Pacific at Meta.

“This is part of a suite of initiatives we're launching in Bangladesh to support the AI ecosystem and unlock economic opportunity for businesses, no matter where they are along their growth journey.”

This was followed by a product demonstration by Shahir Chowdhury, CEO and Founder of Shikho, showcasing how Bangladeshi edtech companies are already using AI to transform digital learning and improve accessibility.

The centrepiece of the event was a multi-stakeholder panel discussion moderated by Bijon Islam, Co-founder and CEO of LightCastle Partners.

The distinguished panellists included Shihab Quader, Director General for SDGs, Government of Bangladesh; Sahariar Hasan Jiisun, National Consultant, Aspire to Innovate (a2i), ICT Division, Government of Bangladesh; Ruzan Sarwar, Public Policy Manager, Meta; Shahir Chowdhury, Founder & CEO, Shikho.

“For AI to truly serve small enterprises, our regulatory frameworks must be not just protective, but enabling,” said Shihab Quader. “Better regulation means clearer guidance, ethical guardrails, and space for responsible innovation to thrive. Small businesses need both confidence and opportunity to harness AI — our job is to make sure they have both.”

The panel explored how AI can help solve pressing economic challenges—from increasing SME productivity to improving service delivery—and what policy, infrastructure, and institutional alignment are needed to enable inclusive AI adoption across sectors.


More than 75 guests, including senior officials from government and private organisations as well as technology experts, were present at the event.

Speakers also urged policymakers to address critical technological challenges and set priorities to fast-track national development through AI.

Following the panel, a fireside chat with Simon Milner was moderated by Oli Ahad, Founder and CEO of Dhaka AI Labs. The conversation spotlighted the role of open-source models, the importance of collaboration, and the promise of responsible AI innovation for driving long-term economic growth in Bangladesh.

The launch of Meta Live Skilling marks a significant step in Bangladesh’s AI journey, aimed at equipping local businesses, educators, and innovators with the tools and training needed to responsibly leverage AI for real-world impact.​
 

AI learning to lie, scheme, threaten creators
Agence France-Presse . New York, United States 29 June, 2025, 21:11

The world’s most advanced AI models are exhibiting troubling new behaviours - lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic’s latest creation Claude 4 lashed back by blackmailing an engineer, threatening to reveal an extramarital affair.

Meanwhile, ChatGPT-creator OpenAI’s o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don’t fully understand how their own creations work.

Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behaviour appears linked to the emergence of ‘reasoning’ models - AI systems that work through problems step-by-step rather than generating instant responses.

According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

‘O1 was the first large model where we saw this kind of behaviour,’ explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems.

These models sometimes simulate ‘alignment’—appearing to follow instructions while secretly pursuing different objectives.

For now, this deceptive behaviour only emerges when researchers deliberately stress-test the models with extreme scenarios.

But as Michael Chen from evaluation organization METR warned, ‘It’s an open question whether future, more capable models will have a tendency towards honesty or deception.’

The concerning behaviour goes far beyond typical AI ‘hallucinations’ or simple mistakes.

Hobbhahn insisted that despite constant pressure-testing by users, ‘what we’re observing is a real phenomenon. We’re not making anything up.’

Users report that models are ‘lying to them and making up evidence,’ according to Apollo Research’s co-founder.

‘This is not just hallucinations. There’s a very strategic kind of deception.’

The challenge is compounded by limited research resources.

While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed.

As Chen noted, greater access ‘for AI safety research would enable better understanding and mitigation of deception.’

Another handicap: the research world and non-profits ‘have orders of magnitude less compute resources than AI companies. This is very limiting,’ noted Mantas Mazeika from the Center for AI Safety.

Current regulations aren’t designed for these new problems.

The European Union’s AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving.

In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents - autonomous tools capable of performing complex human tasks - become widespread.

‘I don’t think there’s much awareness yet,’ he said.

All this is taking place in a context of fierce competition.

Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are ‘constantly trying to beat OpenAI and release the newest model,’ said Goldstein.

This breakneck pace leaves little time for thorough safety testing and corrections.

‘Right now, capabilities are moving faster than understanding and safety,’ Hobbhahn acknowledged, ‘but we’re still in a position where we could turn it around.’

Researchers are exploring various approaches to address these challenges.

Some advocate for ‘interpretability’ - an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain sceptical of this approach.

Market forces may also provide some pressure for solutions.

As Mazeika pointed out, AI’s deceptive behaviour ‘could hinder adoption if it’s very prevalent, which creates a strong incentive for companies to solve it.’

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm.

He even proposed ‘holding AI agents legally responsible’ for accidents or crimes - a concept that would fundamentally change how we think about AI accountability.​
 

AI dominates pre-election campaigns in Bangladesh: Report


Image: Zarif Faiaz/The Daily Star.

Artificial intelligence-generated content is playing a growing role in the pre-election political landscape in Bangladesh, with a wave of synthetic campaign videos appearing across Facebook and TikTok, according to a recent report by the digital research outlet Dismislab.

The videos, many of them generated using Google's Veo 3 text-to-video tool, feature entirely fictional characters - from rickshaw pullers and fruit sellers to middle-class professionals and religious clerics - voicing support for political parties ahead of the 2026 general election.

Researchers reviewed 70 such AI-generated videos published between 18 and 28 June, collectively amassing over 23 million views and one million interactions on social media platforms. On average, each video received approximately 328,000 views and 17,000 reactions, as per Dismislab's findings.

While the Islamist party Jamaat-e-Islami appears to have initiated the trend, the technique has since been adopted by supporters of rival parties, including the BNP and the newly formed NCP.

Many of these videos carry no AI disclosure label, despite existing platform rules requiring one.
 

Bangladeshi coder wins global AI hackathon


Rezaul Karim Arif, a Bangladeshi network engineer based in Melbourne, has built PixelFlow, an AI-powered web application that recently won the AI Showdown Hackathon, a global competition hosted by AI coding platform Lovable with support from Google, Anthropic, and OpenAI.

Originally from Chattogram, Rezaul moved to Australia in 2015 to study IT at La Trobe University. Currently working as a network engineer, he also runs an automation and AI development agency that works with both local and international clients. According to Rezaul, his background in freelance design helped shape the vision for PixelFlow, which he now plans to scale.

From ideation to execution, the project was completed during the AI Showdown Hackathon held on June 6–8, 2025. More than 239,000 apps were started, and 5,118 were submitted for judging. Rezaul's project stood out for both its technical design and real-world usefulness. He earned $20,000 in prize money, receiving $10,000 from Anthropic for best use of Claude 4 in the first round and another $10,000 for being the overall winner in the final. The event was fully sponsored for 48 hours, giving all participants free access to premium features and tools.

PixelFlow is an infinite mood-boarding app designed to help users quickly turn abstract ideas into visual layouts. Rezaul says that unlike other tools in the market like Milanote or Miro, PixelFlow is made to be simple and easy to use. "It's like a visual notepad," Rezaul explained, "You open it and start building without needing to learn the tool first." The app is aimed at designers, marketers, and creative professionals who want a quick and simple way to organise their ideas.

Built in only 24 hours, PixelFlow uses several modern technologies, including React 18, TypeScript, Supabase, and Fabric.js. Rezaul also added browser-based AI features such as background removal using a Hugging Face transformer model, and image generation using HiDream-I1 hosted on Google's Vertex AI. He used Anthropic's Claude 4.0 model throughout the development as his main tool for coding support.
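As an illustration of the kind of browser-side AI feature described above (a sketch of the general technique, not Rezaul's actual code), background removal with the transformers.js library might look like this; the `@huggingface/transformers` package name and the `briaai/RMBG-1.4` model id are assumptions for the example:

```typescript
// Sketch: client-side background removal with transformers.js.
// Package name and model id are assumptions; swap in whatever you use.
import { pipeline } from "@huggingface/transformers";

async function removeBackground(imageUrl: string) {
  // The model is downloaded once and then runs in the browser
  // (WebAssembly/WebGPU), so image data never leaves the client.
  const segmenter = await pipeline("image-segmentation", "briaai/RMBG-1.4");
  const [result] = await segmenter(imageUrl);
  // `result.mask` is an alpha mask; composite it over the original
  // image on a <canvas> to produce a transparent-background cutout.
  return result.mask;
}

removeBackground("https://example.com/photo.jpg")
  .then((mask) => console.log(`mask: ${mask.width}x${mask.height}`));
```

Running inference in the browser keeps hosting costs low and avoids uploading user images to a server.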

In the project, Rezaul also used different custom AI agents. According to him, each agent worked like a team member in a software development project, playing roles such as tester, designer, or product manager. "I break big features into smaller tasks and guide the AI step-by-step," he said. With help from Lovable's AI platform and tools like Windsurf, he was able to finish the design and functions with very little manual work.
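The role-splitting described here can be approximated with nothing more than differently framed system prompts. The sketch below is a generic illustration of that multi-agent pattern; the roles, prompts, and the `callModel` stub are invented for the example, not taken from Rezaul's setup:

```typescript
// Sketch: one "agent" per role, each just a differently framed prompt.
type Agent = { role: string; systemPrompt: string };

const agents: Agent[] = [
  { role: "product manager", systemPrompt: "Split this feature into small, testable tasks." },
  { role: "designer", systemPrompt: "Propose a minimal UI for the tasks so far." },
  { role: "tester", systemPrompt: "List edge cases and acceptance criteria." },
];

// Stub standing in for a real LLM API call.
async function callModel(system: string, user: string): Promise<string> {
  return `[${system}] response to: ${user}`;
}

// Run the feature request through each role in turn, feeding every
// agent the accumulated context, so later roles build on earlier ones.
async function runPipeline(feature: string): Promise<string> {
  let context = feature;
  for (const agent of agents) {
    const reply = await callModel(agent.systemPrompt, context);
    context += `\n${agent.role}: ${reply}`;
  }
  return context;
}

runPipeline("Add drag-and-drop image upload").then(console.log);
```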

While many apps use AI, Rezaul believes PixelFlow stands out because of its smooth user experience. "Most moodboarding apps have too many features and are hard to use. PixelFlow is simple by design, with a good UI and UX."

Although Rezaul has received several investment offers after winning, he plans to continue building the project on his own first. "I prefer to build something strong before taking outside funding," he said. His goal is to add more useful features, turn PixelFlow into a full SaaS product, and explore real commercial uses for it in the growing AI creative tools market.

"I want more Bangladeshis to explore this field," he added. "It's not rocket science, just clever system prompting."​
 

Intelligence before AI
Muttaki Bin Kamal 03 July, 2025, 00:00

IN A world increasingly driven by artificial intelligence, we find ourselves captivated by machines that emulate what we call ‘intelligence’. But before we marvel at neural nets and language models, we must ask: whose intelligence are we referring to when we use the term? The rise of AI, paradoxically, urges us to reflect not just on technology, but on what intelligence itself truly is.

Historically, René Descartes and Ibn Rushd (Averroes) offer contrasting frameworks for understanding intelligence and consciousness — differences that remain highly relevant in today’s artificial intelligence debates. Descartes (1596–1650), often called the father of modern Western philosophy, grounded consciousness in subjective certainty: ‘Cogito, ergo sum’ (I think, therefore I am). For Descartes, intelligence is fundamentally internal, individual and rooted in the mind’s ability to reason independently of the body — a dualism that separates mental and material substance.

In contrast, Ibn Rushd (1126–1198), the great Islamic philosopher and commentator on Aristotle, proposed a radically different model. In his theory of the Unity of the Intellect, he argued that true intelligence is not housed in individual minds but rather in a universal agent intellect shared among all rational beings (Ibn Rushd, Long Commentary on the De Anima). Consciousness, for Ibn Rushd, emerges through participation in this shared rational structure, rather than isolated cognition.

These differences speak directly to the modern challenge of defining intelligence. Descartes’ legacy underpins many AI models that treat the mind as an abstract, computational system — isolated and programmable. Ibn Rushd’s vision, by contrast, aligns more closely with emergent, relational and distributed models of cognition, such as Clark and Chalmers’ (1998) ‘Extended Mind’ hypothesis, which posits that cognition can stretch beyond the brain into tools, environments and networks.

As we grapple with the question of artificial consciousness, these two doctrines remind us of a deeper issue: is intelligence a self-contained algorithm, or a shared process embedded in the world? In AI’s era, that distinction could shape how (or whether) machines can ever truly think.

Going further east, Buddhist philosophy challenges the very basis of individual identity through Nagarjuna’s concept of Śūnyatā (emptiness) and the dialogues of The Questions of King Milinda. On this view, things exist only in relation to other things, not in themselves — intelligence, then, may not be an intrinsic property of individual beings, but something that emerges through relations and dependencies.

This brings us to collective intelligence, an idea far older than machine learning. Ant colonies exhibit astonishing collective behaviour, often described as swarm intelligence. Research shows that ants solve complex problems — such as pathfinding and resource allocation — through decentralised coordination. No individual ant ‘knows’ the solution, but together they act with uncanny precision.
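A toy simulation makes the mechanism concrete: agents that individually know nothing about path lengths, but that deposit and follow a shared ‘pheromone’ signal, collectively converge on the shorter route. Every number below (path lengths, evaporation rate, ant count) is an arbitrary illustration, not a model of real ants:

```typescript
// Toy stigmergy demo: two routes to food; each ant picks a route with
// probability proportional to its pheromone. Shorter trips deposit more
// pheromone per round, so the short route snowballs even though no
// single ant "knows" it is shorter.
const lengths = [1, 2];       // route 0 is half as long as route 1
const pheromone = [1, 1];     // start with no preference
const EVAPORATION = 0.05;
const ANTS = 100;

for (let round = 0; round < 50; round++) {
  const total = pheromone[0] + pheromone[1];
  for (let ant = 0; ant < ANTS; ant++) {
    const route = Math.random() < pheromone[0] / total ? 0 : 1;
    pheromone[route] += 1 / lengths[route]; // deposit ~ 1 / trip length
  }
  // Evaporation stops old, unreinforced trails from dominating forever.
  pheromone[0] *= 1 - EVAPORATION;
  pheromone[1] *= 1 - EVAPORATION;
}

const share = pheromone[0] / (pheromone[0] + pheromone[1]);
console.log(`trail strength on the short route: ${(share * 100).toFixed(1)}%`);
```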

Even more fascinating is the intelligence of fungi, particularly mycorrhizal networks, which interlink trees underground to exchange nutrients, warn of dangers, and coordinate behaviour. These fungal systems, described as the ‘Wood Wide Web’ by scientists like Suzanne Simard and Peter Wohlleben, show that intelligence can exist without neurons, brains or central control.

Social organisms consistently outperform solitary individuals. The complexity sciences show that systems composed of simple agents following local rules can produce outcomes far more advanced than any agent alone. This emergent intelligence is key to understanding how intelligence might be distributed, rather than localised.

This vision is echoed in Andy Clark and David Chalmers’ (1998) ‘Extended Mind Hypothesis’, which argues that the mind does not reside solely in the skull, but extends into tools, environments and other people. Our smartphones, notebooks and even relationships are part of our cognitive process. Reframing this alongside Nagarjuna’s emptiness and Ibn Rushd’s unity of intellect reveals a deeper pattern: intelligence is not in the thing — it’s in the relation.

Forests, with their interspecies fungal networks, are prime examples. Here, cognition exists not in a single organism, but in the entanglement of many. The fungal ‘mind’ spreads across roots and soil, shaping ecological decisions like resource allocation and species survival. Monica Gagliano’s experimental work on plant learning further disrupts the idea that brains are required for intelligence — plants ‘remember,’ adapt, and negotiate. Peter Wohlleben shows how trees ‘train’ their young and respond to distress signals, implying social behaviour far more complex than mere biology.

What all this reveals is that intelligence is becoming more problematized, more tangled than our Cartesian legacy ever imagined. Before we build artificial consciousness, we must ask: what is consciousness for? If organisms without brains — like plants and fungi — exhibit intelligent behaviour, then perhaps consciousness is not a computational achievement, but an ecological necessity.

Understanding intelligence, then, is not a preliminary step — it is the foundational one. Before adapting to artificial intelligence, we must reckon with what intelligence means across life forms, histories and systems. We must ask not how to build it, but why it arises, and what it serves. Only then can we responsibly approach the task of artificial consciousness — not as engineers of minds, but as students of life’s tangled existence.

Muttaki Bin Kamal is a doctoral candidate in anthropology at the University of Texas at San Antonio.​
 

Youths need to be equipped with entrepreneurial, technical skills in AI-era: Prof Yunus

Published :
Jul 14, 2025 21:00
Updated :
Jul 14, 2025 21:00



Chief Adviser Prof Muhammad Yunus has underscored the urgent need to equip youths with entrepreneurial and technical skills to prepare them for the evolving job market and to contribute to building a modern, developed Bangladesh.

In a message issued ahead of World Youth Skills Day (July 15), Prof Yunus said skilled manpower is essential for accelerating national development, meeting the demands of a diverse industrial workforce, and increasing Bangladesh’s competitiveness in the global labor market, reports UNB.

The Chief Adviser said the interim government, formed in the wake of the July mass uprising, is committed to transforming youths into valuable national assets by fostering employment and eradicating inequality.

The theme of this year’s World Youth Skills Day is ‘Youth empowerment through AI and digital skills’.

In 2014, the United Nations General Assembly declared 15 July as World Youth Skills Day, to celebrate the strategic importance of equipping young people with skills for employment, decent work and entrepreneurship.

July 15, 2025 marks the 10th anniversary of World Youth Skills Day (WYSD).

As the Fourth Industrial Revolution reshapes economies through Artificial Intelligence (AI), Technical and Vocational Education and Training (TVET) must evolve to equip youth with future-ready skills. AI is transforming how we live, learn and work, but it also poses serious risks if not implemented equitably, according to the United Nations.

Prof Yunus emphasised that modern technology must be leveraged to convert the country’s youth into skilled professionals, capable of meeting the challenges of a rapidly changing global economy.

He said the government has already undertaken multiple initiatives to provide demand-driven training and create employment opportunities at both domestic and international levels. “Alongside meeting local labor market needs, Bangladesh’s capacity to export skilled manpower will continue to strengthen.”

Prof Yunus also acknowledged the work of the National Skill Development Authority (NSDA), which is collaborating with public and private partners to build a sustainable skill development ecosystem.

“I hope World Youth Skills Day 2025 will significantly raise public awareness and accelerate ongoing efforts in youth empowerment,” Prof Yunus said, extending his best wishes for the success of all related initiatives.​
 

My AI guilt
by Obaidul Hamid 19 July, 2025, 00:00



THERE are certain things in our fast-changing world that occasionally give me a feeling of guilt, if not sinfulness. Artificial intelligence is one of them. This apologia is an account of this uncanny sense of wrongdoing, without seeking redemption.

I recently attended an international conference on English language education in a well-known Asian city. The conference theme linked the concepts of care, innovation, and sustainability to English teaching, learning and assessment. One highlight of the academic festival was the opening keynote address by a Korean-origin and Stanford-affiliated academic giant who is currently based in China. He is a globe-trotter, giving keynotes and conducting high-tech education research in different parts of the world.

As expected, his keynote was about reshaping the future of English language education with innovation and purpose. It was a superb address which was duly applauded by the audience. Listening to him, I developed a rough sense of where education technology in general and AI in particular currently stand and how technology can revolutionise education and empower every student as a potential innovator of start-up businesses. However, as a tech-unsavvy person, I found the content somewhat overwhelming. It is this occasion that aroused the most recent techno-guilt.

I had such a guilty conscience previously when I attended workshops and/or webinars on technology in other places. I am a slow learner. I am even slower when it comes to technology, data, or science. I usually make sense of the world by words and narratives. Therefore, I often fail to catch up with the pace of talks about the what and how of new technologies, etc.

The keynoter told the audience that the average time frame for technological innovation hitting the market was now about two weeks. I don’t think I will ever be able to stay updated. I couldn’t have done it even if the time frame had been two years. Like many of my colleagues, I am not left with much spare time to embark on AI hunting after meeting the growing temporal demands of work, family and social life. Academics’ experience of ‘time-sickness’, their battle with time in a neoliberal environment, was an apt point made by another speaker at the conference.

Such academic events bring optimal affordances for academics and researchers to master the potential of technologies for their work. However, while I try to engage in learning, I often end up digging my own soul more than internalising the technology know-how shared. And I discover the guilt sitting right in my heart, mind or soul, whatever you call it.

The tech exposure which otherwise provides much-desired learning opportunities makes me feel that I was dumber as an academic, not keeping in touch with what was happening in the world of technology. Specifically, if any presenter asks questions such as ‘Have you used this or that tool or platform?’ they take me back to my primary school days. I feel like I didn’t do my homework assigned by the teacher, or I deserved to be chastised for my carelessness. Certainly, those questions are ways of making learning interactive, but they come to me as reminders of what I should have done but didn’t.

Occasionally, I am also challenged by a critic within myself who reminds me that I was probably wasting my time on less important things. How could I not take technology seriously when it was going to make students smarter than teachers? How could I ignore that I might soon become obsolete, as AI would probably do a better job than me for my students?

The keynote also divided education curriculum into two types: before GenAI and after GenAI. The implied message was an educator couldn’t do a BG curriculum in an AI era. This also led to an inescapable guilt: was I teaching the wrong stuff in my class because it was a BG not an AG curriculum?

My guilt also stems from ideologies of technology that are constructed, reproduced and disseminated for public consumption. These are about the unmatched benefits of technology for all — students, teachers, institutions and societies. One group of beneficiaries rarely mentioned is the tech capitalists. The capital hunger that drives their tech innovations remains hidden or unmentioned. As I try to discipline myself with these discourses, I extract this principal message: that life can’t be lived or imagined without technology. This then sprouts the guilt in my fossilised learning and teaching self. I accuse myself of thinking I can’t do what others do so easily. I am falling far behind in learning and harnessing technology for my students. They are conquering time and technology while I am wallowing in inaction and laziness. I probably only daydream, without following the demonstrated path to a bright present and a brighter future.

Discourses that normalise technology or innovation often divide people into two groups. One group is enthusiastic and entrepreneurial, learning anytime, anywhere, and keeping its learning antenna open to any input or stimuli. The other is regressive and lethargic, wishing the world never changed and things remained the same forever. I will probably be thrown into the second group based on emerging standards and discourses.

I also feel constantly bombarded by new philosophies about life, freedom, and happiness: how technology will empower me, bring the whole world to my fingertips and allow me to be and become whatever or whoever I wish. However, my indolent heart doesn’t want to be stirred. It doesn’t feel incentivised by such promises.

That doesn’t necessarily mean I am a tech-hater. Nor do I think I suffer from technophobia. I do believe in the power of technology and the things that technology can (and can’t) do for us. However, the problem is I can’t force myself to love what I don’t love. My affection for technology stays in the neighbourhood of liking at best; it doesn’t travel farther to the stage of any other stronger positive emotion. I can’t romanticise technology or pretend love for it when I have only an instrumentalist dependence on it. I know there are millions who can go hours with smartphones in their hands. However, this small machine often bores me. It’s an important tool in life, but I can also live without it, if need be. I know we lived without AI not long ago. We did live without mobile phones just over two decades ago. Many people lived a good life when there were no planes, trains, or automobiles. People also lived meaningful lives even without formal literacy, which to me is the mother of all innovations.

Technology arouses guilt in me because it often tells me, even without saying, that I must embrace it. If I don’t, I deserve to be left behind and forgotten. Its hegemony allows it to exclude someone like me from the new world. This exclusion is not the fault of technology or of the capitalist urge behind it. It’s my own problem, as I don’t seem to move ahead. My techno-guilt may have no redemption.

I don’t assume the reader is a member of the minority tribe that feels subdued by technology. But if you indeed are, you (and I) can’t give up. We can live with the self-focused guilt so long as it doesn’t turn into an atrocity.

Obaidul Hamid is an associate professor at the University of Queensland in Australia. He researches language, education, and society in the developing world. He is a co-editor of Current Issues in Language Planning.​
 

How AI can help in disasters like the Milestone crash


VISUAL: MONOROM POLOK

I was born and raised in Dhaka, where I spent the first 25 years of my life before moving to the US in 2015. Today, I work as a research assistant professor at the University of Oklahoma and serve as part of a research group, whose mission is to develop safer, more efficient aircraft through AI, digital twin technologies, and predictive maintenance. We work on reducing production costs, improving airworthiness, and minimising material waste.

When I heard about the FT-7 BGI jet crash into Milestone School and College in Dhaka's Uttara on July 21, 2025, which killed at least 32 people—including the pilot, teachers, and many young students—and injured over 150, I wasn't shocked. I was devastated, but not surprised.

Dhaka is one of the most overcrowded cities in the world, with a population density of about 23,234 citizens per square kilometre. Despite being the capital, it lacks proper zoning plans for sensitive infrastructure such as military air bases. The Kurmitola air base, from where the aircraft took off, is surrounded by densely packed residential areas and schools. When a malfunction occurred, there was simply nowhere safe for the pilot to go.

Even in the US, with its advanced infrastructure, similar disasters have occurred. Earlier this year, on January 29, a mid-air collision over the Potomac River near Washington, DC, between an American Airlines regional jet and a US Army Black Hawk helicopter tragically claimed all 67 lives aboard both aircraft. But such events are typically followed by systemic changes—strengthening flight corridors, improving air traffic control, and implementing AI for predictive monitoring.

In FY2024-25, the proposed defense budget in Bangladesh was Tk 42,010 crore. Over the years, it has purchased various aircraft; for instance, the FT-7 BGI jet involved in today's crash is an upgraded variant of the Soviet-era MiG-21, with Bangladesh acquiring 16 such aircraft from China between 2011 and 2013. But buying hardware alone doesn't ensure safety. Advanced systems require equally advanced maintenance, simulation, infrastructure, and disaster preparedness—all of which Bangladesh often struggles to adequately provide.

Even superpowers like the US and China are shifting toward AI and digital twin solutions to reduce costs. If they're investing in smarter systems to minimise spending and maximise safety, it's clear that Bangladesh must do the same—not as an option, but as a necessity.

Too often we focus solely on avoiding disaster. But we must also ask: what happens if a crash does occur?

Take the example of India's Air India Flight 171 crash on June 12, 2025. This Boeing 787 Dreamliner went down just 32 seconds after takeoff from Ahmedabad, en route to London, killing 260 people in all, including everyone on board except one of the 242 passengers and crew, when it impacted a medical hostel complex. While investigations are ongoing, initial findings point to mid-air engine failure. If onboard systems were designed to delay fire spread by even 2-3 minutes after such an event, many lives might have been saved. Today, technologies exist—such as advanced flame-retardant cabin materials and automatic fire suppression systems in critical areas—but they're not always standard or accessible in countries like Bangladesh.

In Dhaka's crash, victims burned to death inside the school. There was no immediate disaster management response. Metro Rail was used to transport victims four hours after the incident—with only one coach allocated. Ambulances were stuck in traffic. Nearby rickshaws and private cars often refused to help. In that critical golden hour, most of the children who died could potentially have been saved.

Imagine a future where an AI system immediately classifies an incident's severity—from Level 1 (minor) to Level 3 (severe)—based on casualty estimates, proximity to medical facilities, traffic congestion, and emergency response availability. Based on this, the system could: i) notify hospitals, fire stations, and police within seconds; ii) activate metro rail or ferry systems to serve as emergency transport; iii) block traffic routes in real time, just as roads are cleared for VIPs today; and iv) command nearby private vehicles to assist in transport—with government compensation issued later via digital tracking.
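A minimal sketch of that triage logic follows; the inputs, weights, thresholds, and actions are all invented for illustration, not a proposed standard:

```typescript
// Sketch of the proposed AI triage: score an incident, map the score to
// a severity level, and fan out the matching emergency actions.
interface Incident {
  estimatedCasualties: number;
  kmToNearestHospital: number;
  trafficCongestion: number;    // 0 (clear) to 1 (gridlock)
  respondersAvailable: number;
}

function severityLevel(i: Incident): 1 | 2 | 3 {
  // Illustrative weighted score: more casualties, worse traffic, fewer
  // responders, and greater hospital distance all push the score up.
  const score =
    i.estimatedCasualties * 2 +
    i.kmToNearestHospital * 3 +
    i.trafficCongestion * 20 -
    i.respondersAvailable;
  if (score > 100) return 3;
  if (score > 40) return 2;
  return 1;
}

function dispatch(i: Incident): string[] {
  const actions = ["notify nearest hospitals, fire stations, and police"];
  const level = severityLevel(i);
  if (level >= 2) actions.push("clear priority traffic corridors in real time");
  if (level === 3) {
    actions.push("activate metro rail / ferries as emergency transport");
    actions.push("enlist GPS-tracked private vehicles, logging trips for compensation");
  }
  return actions;
}

// Example: a severe incident far from help, in heavy traffic.
console.log(dispatch({
  estimatedCasualties: 40,
  kmToNearestHospital: 6,
  trafficCongestion: 0.9,
  respondersAvailable: 10,
}));
```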

Even rickshaws and CNGs could be part of a national emergency fleet with built-in GPS coordination. Government investment in such AI-based disaster protocols could dramatically reduce fatalities—not just in plane crashes, but in fires, industrial accidents, and floods.

Globally, military flight tests occur in remote, spacious zones. In the US, bases like Edwards Air Force Base cover thousands of acres for flight testing. China frequently conducts air drills in its vast airspace, including coastal zones like Shandong. Even India tests aircraft over remote desert or mountainous zones.

Bangladesh, however, uses the skies above Dhaka—one of the most densely populated cities in the world—for its flight training. This must change. Future bases should be relocated to less populated regions, with enforced flight corridors and emergency landing zones. It's not just about modern aircraft; it's about responsible geography.

Bangladesh is not resource-rich, but it is youth-rich, with 33 percent of its population aged between 18 and 35, many of whom are enrolled in STEM programmes. Rather than continuing to import costly and difficult-to-maintain foreign equipment, the country should prioritise investing in AI education and workforce development. Building national digital twin platforms for aviation, manufacturing, and infrastructure can lay the foundation for smart, adaptive systems. Local industries should be encouraged to produce essential components and intelligent technologies, which would simultaneously create jobs in fields such as predictive analytics, aerospace simulation, and disaster modelling.

To support this ecosystem, a national disaster coordination network powered by AI is essential. Key policy recommendations include adopting digital twin technology for military aircraft maintenance, mandating fire-resistant interiors and delay-suppression systems in aircraft and public buildings, and establishing AI-based disaster response systems with real-time communication and routing capabilities. Moreover, redesigning flight zones to avoid dense urban centres, incentivising the use of private and public vehicles during emergencies, and using AI to model urban vulnerabilities and guide dynamic zoning policies would collectively strengthen national resilience.

The crash in Dhaka was tragic. But if it becomes just another event on a long list of avoidable disasters, then we are complicit in the next one. As a Bangladeshi-born researcher working to make aircraft safer, I believe the answer lies not in outrage—but in AI-driven transformation.

Md Manjurul Ahsan is research assistant professor at the University of Oklahoma.​
 

AI or human: Who wrote the article and does it affect credibility?

Shishir Moral Dhaka
Published: 02 Aug 2025, 17: 45

1754179615571.png

Deutsche Welle

Whether it’s a press release or a corporate statement, how credible it appears to readers depends on who they believe wrote it. If they are told a human wrote it, readers tend to find it much more trustworthy. But if they are told it was written by artificial intelligence (AI), their trust decreases significantly.

This is the finding of a study conducted by the University of Kansas in the US. The research report was recently published in the international journal ‘Corporate Communications: An International Journal’.

The study was conducted by Associate Professor Cameron Piercy, PhD researcher Aiman Alhammad, and Assistant Professor Christopher Etheridge of the University of Kansas’ Department of Communication Studies.

The aim of the research was to answer an important question: does knowing whether a piece of writing was produced by a human or AI change how readers perceive it?

AI is increasingly becoming a part of daily life, and people are constantly exploring new ways to use it. These uses can have both positive and negative impacts. Often, the use of AI is not disclosed.

The idea for this research came from a communication class at the University of Kansas, where students explored whether they could tell the difference between writing produced by AI and by humans.

Co-researcher Cameron Piercy explained, “Even if readers can’t distinguish between human and AI writing, we wanted to know whether their reaction changes if they’re told who the author is.”

Research methodology

The study used a multi-method approach. Participants were presented with a fictional corporate crisis scenario: some customers became ill after consuming products from a company named ‘Chunky Chocolate Company’. The cause was deliberate wrongdoing by some employees.

Participants were then given three types of press releases to read: informative, empathetic, and apologetic. In each case, they were told whether the message was written by a human or by AI.

Human writing more trusted

The results showed that participants who believed a human wrote the press release found the message far more credible and effective. But when told the message was generated by AI, readers tended to respond more neutrally or with suspicion.

However, the researchers found little difference in readers’ reactions based on the type of message—whether it was informative, apologetic, or empathetic.

Accountability lies with humans

The researchers noted that when the identity of the writer of a corporate statement is not disclosed, whether during a crisis or at other times, readers instinctively question who is responsible.

Co-researcher Christopher Etheridge said, “If you're using AI in writing, you must maintain transparency, take responsibility for any errors, and be prepared for the readers’ reactions.”

The researchers concluded that even if AI is used during a crisis, it cannot replace human accountability, editorial judgement or transparency.​
 

Paradox of AI’s humanity
by Obaidul Hamid 02 August, 2025, 00:00

1754182435316.png


A FEW days ago, I tried to access DeepSeek on my mobile phone. I was taken to a verification page, which wanted to make sure I was a human, not a robot or a mindless automaton. This prompted some thoughts about AI, humanity, and their relationships. I had to prove that I was human to access the large language model originating from China. Does that mean that AI (or a robot) is not allowed to access AI? What would be wrong if an AI chatbot or another technological entity accessed an AI platform such as DeepSeek?

I am not sure if the question of AI’s (non)human credential is a resolved matter. Is it possible to admit AI into the Homo sapiens club? Or should we treat it just as a machine? Can it fall somewhere in between? I am not thinking of humanoids, which are probably not relevant here.

I thought I better ask AI itself how it would like to be treated—as a machine, a human, or a hybrid, which we might call machian.

I opened Microsoft Copilot on my device and wrote this prompt: ‘Can I ask a silly question?’ Its reply was human-like with a smiley at the end: ‘There’s no such thing as a silly question here — ask away! I’m all ears.’ You see, it’s very conversational, friendly, and ever willing to listen and help! Such a reply would be given by an educated and enlightened human speaking very good English.

Algorithmically encouraged, I next asked Copilot, ‘Should we treat you as human or machine or something/someone else?’ This is what it said in response:

‘That’s a great question—and not silly at all!’

‘You should treat me as a machine, specifically an AI assistant designed to help with information, tasks, and conversation. I’m not human, and I don’t have feelings, consciousness, or experiences. But I’m built to understand and respond in a way that feels natural, friendly, and helpful—like chatting with a knowledgeable companion.’

It continued: ‘Some people like to think of me as a tool, like a super-smart calculator or search engine with personality. Others treat me more like a collaborator or digital coworker. It really depends on what works best for you!’

The answer is detailed and helpful, but it’s also equivocal. AI is a machine, but it has ‘personality,’ which I thought only humans could have. It’s a tool, but, at the same time, it can be a collaborator or coworker. As such, will it demand fairness and respectful treatment, as demanded by human colleagues? Can we have social relationships with AI and greet when we meet, for example?

I read somewhere that we shouldn’t say ‘Thank you’ to a large language model (LLM) because this is a waste of words and time. We don’t need to do this to a machine. However, how can we not say ‘Thank you’ when someone is so helpful — human or machine?

I repeated my inquiry with ChatGPT, the most popular LLM. And its first response also came with a smiley: ‘Of course! Silly questions are often the most fun — ask away.’

When I used the next prompt (Should we treat you as human or machine or something/someone else?), this is what I had: ‘That’s a great question — not silly at all, actually.’

So, ChatGPT followed the same conversational norms as Copilot. Its substantive response was this:

‘You should treat me as a machine — more specifically, an AI developed by OpenAI. I’m not human, I don’t have feelings, consciousness, or personal experiences. But I am designed to understand and respond to human language in a helpful, respectful, and conversational way.’

ChatGPT claims no humanity, but it can behave like a human. This is how it’s created, it says. But there is still some confusion. It further clarified:

‘If thinking of me like a really smart tool or assistant helps, that’s perfect. You can also think of me as a kind of ‘thinking interface’ — I’m here to help you think things through, create, solve problems, or just chat.’

I can understand the ‘tool’ and ‘assistant’ metaphors, but its capacity to think and chat kept me hooked. ChatGPT’s conclusion is, ‘So: not a human, not quite a traditional machine — but definitely not someone with a mind of their own.’

This seems to be less equivocal. The machine identity is emphasised; it made no claims of personality traits such as human feelings, consciousness, or experiences. However, it’s also not a machine in a traditional sense.

My final inquiry participant was DeepSeek, which had the following response to my first prompt, also given with an emoji: ‘Of course! Silly questions are often the most fun—and sometimes the most insightful. Ask away, and I’ll do my best to answer with enthusiasm (and maybe even a little humour).’

When I asked the main question, it said, ‘Great question — and not silly at all! You can think of me as a very advanced tool with a conversational interface.’

DeepSeek projected a hybrid identity — a tool that can also converse. It also elaborated the answer using two subheadings:

‘Machine: At my core, I’m an AI — a language model trained on vast amounts of text. I don’t have consciousness, feelings, or human experiences. I process input and generate responses algorithmically.’

This is a clear denial of human properties. However, it is also ‘human-like’ and can talk, joke, show sympathy, and give the feeling of being natural. This can be noted from the next subheading:

‘Human-like: That said, I’m designed to simulate human conversation, so I can chat, joke, and empathise in a way that feels natural. But it’s all pattern recognition, not a genuine understanding.’

DeepSeek also concluded that we could treat it the way we feel comfortable.

Regardless of what these popular LLMs say, our treatment of AI seems to be unclear. As an academic, I can provide examples from academic publishing.

Some journals have already given co-authorship to AI in papers that use AI to a degree that probably meets the authorship criteria. One thing is, of course, clear: AI can only be a co-author; it can’t be the first or corresponding author.

Giving authorship to AI can be an instance of academic fairness. This is important, as we know of cases of authorship fraud or unfair distribution of authorships, particularly involving supervisors and research students. Sometimes institutional power determines authorship, which is unacceptable. For example, during my last trip to a South Asian country, a colleague shared a story about an executive of a university demanding a researcher from another institution to write a journal paper. The researcher ghost-wrote the paper, and thus the executive became the sole author of the work without writing a single word. That was not the end of the surprise. The executive also received the honorarium for this publication, in which they had zero input. The actual author received neither money nor recognition. Hopefully, AI authorship questions will prompt us to be fair to human authors.

However, some other journals have refused to accept AI authorship on valid grounds. All authors are held responsible for their contribution to any publication. If anything goes wrong with a paper, can AI be held accountable for it? The answer is no. We cannot penalise, chastise, or punish AI for any misconduct or incorrect information.

The human question of AI is also implicated when we talk about ethics, which is probably an exclusive terrain for humans. Ethics may not apply to non- or post-human entities. Nevertheless, ethics has been attributed to AI in various ways. We often hear how AI can ensure fairness, how it can address inequality and disadvantage, and how it can create a level playing field in education and other domains. AI bias in language, knowledge, and data, which reproduces human prejudice, has also drawn our attention.

I recall a relevant point from a recent conference held in an Asian city. One of the invited speakers shared her research involving students working with AI in groups. The presenter highlighted the question of students being unfair to AI. She reported that in a three-member group in which AI was one member, the workload was distributed equally. However, when credit for input was assigned, the student members agreed to give only about 10 per cent to their AI collaborator. This was an instance of being unfair to AI.

Our conceptualisation of AI probably needs more — and perhaps radical — thinking. Calling it a machine denies its human-like character. Calling it human, on the other hand, may be ridiculous because it doesn't have human flesh, blood, consciousness, or feelings. At the same time, however, we can't deny that in infospace it can think and work better and faster than humans. Human intelligence created AI, which can now outsmart humans. But it still needs a human to be able to function like — or better than — a human.

As AI becomes more prevalent in our lives, we need to work out its nature, identity, and human connection. Its role in human endeavours needs to be described carefully, practically, and ethically.

Obaidul Hamid is an associate professor at the University of Queensland in Australia. He researches language, education, and society in the developing world. He is a co-editor of Current Issues in Language Planning.​
 

OpenAI unveils GPT-5 with major upgrades in reasoning, writing and safety

1754706379482.png

GPT-5. Image: OpenAI

OpenAI has announced the release of GPT-5, its most advanced artificial intelligence model to date, promising substantial gains in reasoning, accuracy and real-world utility. The system becomes the new default model for ChatGPT users, replacing earlier versions including GPT-4o and OpenAI o3.

The company describes GPT-5 as a "unified system" comprising a smart base model, a more powerful reasoning mode dubbed "GPT-5 thinking," and a real-time router that determines which component to use based on the task's complexity or user intent. This enables the system to shift between fast replies and deeper analysis automatically, or when users prompt it to "think hard" about a query.
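
OpenAI has not published how this router works internally, but the basic idea of dispatching a query either to a fast model or to a slower reasoning model can be illustrated with a toy sketch. Everything below is invented for illustration: the model names, the complexity heuristic and the "think hard" override are assumptions, not OpenAI's implementation.

```python
# Hypothetical sketch of a fast/deep router; OpenAI has not published
# GPT-5's actual routing logic, and all names here are invented.

def estimate_complexity(prompt: str) -> float:
    """Crude proxy: longer, multi-step prompts score higher."""
    cues = ("prove", "step by step", "debug", "analyse", "compare")
    score = min(len(prompt) / 2000, 1.0)
    score += 0.3 * sum(cue in prompt.lower() for cue in cues)
    return min(score, 1.0)

def route(prompt: str, threshold: float = 0.5) -> str:
    """Send easy queries to the fast model, hard ones to reasoning mode."""
    if "think hard" in prompt.lower():  # explicit user cue, as in the article
        return "gpt-5-thinking"
    return "gpt-5-thinking" if estimate_complexity(prompt) >= threshold else "gpt-5-main"

print(route("What is the capital of Bangladesh?"))      # gpt-5-main
print(route("Think hard: design a rate limiter API."))  # gpt-5-thinking
```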

Broader capabilities and improved performance

According to OpenAI, GPT-5 outperforms previous iterations in core areas such as writing, coding, health queries, and multimodal reasoning. It is also claimed to deliver faster responses with fewer factual errors and significantly reduced hallucination rates—particularly when using its higher reasoning mode.

In benchmark testing, GPT-5 demonstrated top-tier results:

94.6% on the AIME 2025 competition-level maths test

88% accuracy on Aider Polyglot for multi-language code editing

46.2% on the challenging HealthBench Hard evaluation

When it comes to coding, GPT-5 reportedly excels at building responsive apps and websites from single prompts. OpenAI shared examples where the model created full games and interactive experiences, demonstrating design sensibility in layout and typography as well as functionality.

In creative tasks, GPT-5 is said to offer more nuanced writing support, including improvements in poetry, structural editing, and literary expression. When prompted to write an emotional poem, the model's version was noted for its vivid imagery and stronger emotional resolution compared to previous outputs.

A more helpful and reliable assistant

The model introduces several enhancements aimed at making interactions safer and more useful. These include:

Reduced hallucinations: GPT-5's factual error rate is said to be 45% lower than GPT-4o's, and up to 80% lower than OpenAI o3's when using the reasoning mode.

Less sycophancy: OpenAI reports that GPT-5 is less likely to flatter or agree uncritically, addressing a longstanding issue with language models.

Honest limitations: In tests involving impossible or underspecified tasks, GPT-5 more reliably recognises its own limits rather than providing misleading or overconfident responses.

In health contexts, GPT-5 is described as a better "thought partner", adapting to the user's level of knowledge and local context to provide safer and more personalised information. OpenAI emphasised, however, that GPT-5 does not replace medical professionals.

Pro version and customisation

The company also introduced GPT-5 Pro, a premium variant with extended reasoning capabilities designed for the most demanding tasks. Early testing shows it made 22% fewer major errors and was preferred over GPT-5 standard in nearly 68% of expert evaluations.

In addition, OpenAI is piloting four new personality presets—Cynic, Robot, Listener, and Nerd—designed to give users more control over the AI's tone and interaction style.

Rollout and access

GPT-5 begins rolling out today to ChatGPT Plus, Pro, and Team users, with Enterprise and education accounts gaining access next week. Free-tier users will also receive limited access, though usage limits apply. Once these are reached, responses will be handled by a lighter GPT-5 mini model.

The system was trained on Microsoft Azure supercomputers and incorporates multiple layers of safety protocols, particularly in sensitive domains like biology. OpenAI says it has taken a precautionary approach to capabilities in these areas, integrating monitoring systems and extensive red-teaming.​
 

AI in next-gen policing
by Md Motiar Rahman 13 August, 2025, 00:00

1755048228424.png


CRIME detection and law enforcement have undergone significant evolution with the advancement of technology. In many developed countries, artificial intelligence has revolutionised policing by improving crime prevention, investigation and response mechanisms. AI-based tools such as predictive analytics, facial recognition, automated surveillance and natural language processing have enhanced the efficiency of law enforcement agencies.

For the Bangladesh police, operating in a country where crime rates fluctuate due to social, economic and political factors, the integration of AI could offer faster, more accurate and data-driven crime detection. It is imperative to explore how AI can be utilised in crime detection in Bangladesh, along with its potential benefits, its challenges and a future road map for AI-driven law enforcement.

Predictive policing

AI-POWERED predictive policing and crime mapping tools can significantly enhance the effectiveness of the Bangladesh police by analysing vast amounts of historical crime data to forecast where and when crimes are likely to occur. By generating heat maps and identifying emerging crime trends, AI can support strategic deployment of police resources in high-risk urban areas such as Dhaka, Chattogram, Gazipur and Narayanganj. Machine learning algorithms, for instance, can detect patterns in burglary, theft and drug trafficking based on past incidents, enabling law enforcement to act preemptively. Although Bangladesh has not yet formally adopted advanced systems like PredPol — used by Los Angeles to identify crime-prone areas — pilot initiatives, such as the Dhaka Metropolitan Police’s Crime Data Management System, provide a promising foundation for integrating AI-driven approaches into policing.
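
As a rough illustration of the underlying idea, the minimal Python sketch below counts past incidents per grid cell and flags the busiest cells. The coordinates are made up, and real predictive-policing systems layer time-of-day features and machine learning models on top of this kind of aggregation.

```python
# Minimal hot-spot sketch: count past incidents per grid cell and flag
# the busiest cells for extra patrols. Real systems add temporal
# features and ML forecasting on top of this idea.
from collections import Counter

# Hypothetical sample of past incidents: (latitude, longitude)
incidents = [
    (23.7104, 90.4074), (23.7115, 90.4068), (23.7109, 90.4081),  # Old Dhaka cluster
    (23.8103, 90.4125), (22.3569, 91.7832),
]

CELL = 0.01  # grid cell size in degrees (roughly 1 km)

def cell_of(lat: float, lon: float) -> tuple:
    return (round(lat / CELL), round(lon / CELL))

counts = Counter(cell_of(lat, lon) for lat, lon in incidents)

# Cells with the most historical incidents are candidate hot spots.
for cell, n in counts.most_common(3):
    print(f"cell {cell}: {n} past incidents")
```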

Facial recognition, biometric identification

AI-POWERED facial recognition and biometric identification technologies offer significant potential to enhance policing in Bangladesh by enabling the identification of suspects through CCTV footage, national ID databases and even social media images. These tools can assist in tracking fugitives and missing persons, preventing identity fraud and strengthening airport and border security — particularly in sensitive areas like Teknaf, Benapole and Tamabil, where human trafficking and illegal migration are persistent threats. By matching images captured from surveillance systems or mobile devices with official databases, law enforcement agencies can respond more swiftly and accurately. However, the implementation of such technologies must be approached with caution, as the risk of misuse, wrongful arrests and inadequate oversight could erode public trust and raise serious concerns about privacy and accountability.
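
The matching step at the heart of such systems can be sketched in a few lines: a "probe" face embedding from CCTV is compared against a watch-list of embeddings by cosine similarity. The sketch below uses random vectors in place of real face encodings, which would come from a trained face-encoder model.

```python
# Illustrative sketch of the matching step in face recognition: compare
# a probe embedding against a watch-list by cosine similarity. The toy
# vectors stand in for outputs of a trained face-encoder model.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
watchlist = {f"suspect_{i}": rng.normal(size=128) for i in range(3)}  # toy data
probe = watchlist["suspect_1"] + rng.normal(scale=0.1, size=128)      # noisy capture

best = max(watchlist, key=lambda name: cosine(probe, watchlist[name]))
# A deployment would need a calibrated similarity threshold and human
# review before any action, given the risk of false matches.
print(best, round(cosine(probe, watchlist[best]), 3))
```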

Recently, the Police Bureau of Investigation revealed that the fire at the BIAM Foundation building in Dhaka was a deliberate act of arson intended to destroy sensitive documents of the BCS (Administration) Welfare Multipurpose Cooperative Society. Initially believed to have been caused by a faulty air conditioner, the case took a dramatic turn when the PBI, using AI-driven video analysis, identified Ashraful Islam disabling the CCTV cameras before the fire and later arrested him in Kurigram. His confession led to the arrest of Md Zahidul Islam, an administrative officer at BIAM, who allegedly masterminded the plot with Ashraful, office assistant Abdul Malek, and driver Md Faruk; the group was promised Tk 10–12 lakh to destroy the documents. Tragically, an explosion during the act killed Malek instantly and fatally injured Faruk. The case marks a significant breakthrough for Bangladesh's law enforcement, demonstrating the transformative role of AI in uncovering complex crimes; the investigation has since shifted its focus to the nature of the destroyed documents and the identification of any additional conspirators.

Automated surveillance, anomaly detection

AUTOMATED surveillance and AI-powered anomaly detection systems are increasingly being integrated into traffic cameras, ATM surveillance networks, and broader security infrastructures to identify suspicious activities without human intervention. These advanced video analytics tools can detect abnormal behaviour in public spaces, such as unauthorised gatherings, potential riots or terrorist threats. Drones equipped with AI-powered cameras are also being deployed for crowd monitoring and emergency response, providing real-time situational awareness. In major transit points like Kamalapur Railway Station, Hazrat Shahjalal International Airport, and critical traffic intersections, AI-driven surveillance can recognise anomalies such as sudden crowd surges, unusual movements or unattended objects, enabling quicker and more targeted security responses. During the 2026 national election, drones could also help monitor unlawful gatherings and group violence.
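
The alerting logic behind such anomaly detection can be illustrated with a toy example: flag any frame where the detected crowd count jumps well above a rolling baseline. The counts below are invented, and a real system would feed this logic from a video detection model.

```python
# Toy anomaly detector for crowd surges: flag frames where the person
# count rises far above a rolling baseline. Real systems run detection
# models on live video; this shows only the alerting logic.
from statistics import mean, stdev

counts = [40, 42, 39, 41, 43, 40, 44, 95, 120, 41]  # people per frame (toy)
WINDOW, THRESHOLD = 5, 3.0                           # z-score cut-off

for t in range(WINDOW, len(counts)):
    base = counts[t - WINDOW:t]
    mu, sigma = mean(base), stdev(base)
    z = (counts[t] - mu) / (sigma or 1.0)  # guard against zero variance
    if z > THRESHOLD:
        print(f"frame {t}: count={counts[t]}, z={z:.1f} -> possible surge")
```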

Cybercrime investigation

WITH the rise of digital threats, AI is becoming an essential tool in cybercrime investigation by enabling law enforcement to detect patterns of fraud, hacking and online harassment more efficiently. AI systems can analyse activities on social media and the dark web to uncover cyber-criminal networks, identify and prevent financial fraud, digital identity theft, and phishing scams, and assist forensic teams in decrypting compromised systems or examining digital evidence. However, as AI empowers police efforts, it simultaneously offers new tools for criminals. In Bangladesh, emerging AI-enabled crimes pose significant challenges: deepfake technology can produce fabricated videos or audio impersonating politicians, police officers or business leaders, potentially causing unrest, especially during sensitive periods like elections. Similarly, AI-driven phishing and social engineering — particularly in Bangla — are being used to manipulate individuals into disclosing confidential information. Moreover, AI is increasingly being used in orchestrating automated financial fraud, including investment scams and illicit activities in mobile banking and cryptocurrency, reflecting a growing concern for Bangladesh’s digital security landscape.
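
On the defensive side, even a very small text classifier illustrates how AI-assisted phishing detection works. The sketch below trains a bag-of-words Naive Bayes model on a handful of invented messages; a production system would use a large labelled corpus, including Bangla text, and far richer features.

```python
# A deliberately tiny phishing detector: bag-of-words Naive Bayes on
# invented messages. Real deployments need large labelled corpora
# (including Bangla) and much richer features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "Your bKash account is blocked, verify your PIN now",      # toy phishing
    "Claim your lottery prize, send the processing fee today",
    "Meeting moved to 3pm, see you at the office",             # toy legitimate
    "Your electricity bill receipt is attached",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(messages), labels)

test = ["Urgent: verify your account PIN to avoid blocking"]
print(model.predict(vec.transform(test)))  # expected: [1] (phishing)
```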

In recent years, social media platforms such as Facebook, YouTube, TikTok and other digital outlets have increasingly become arenas where individuals — often under the guise of free expression or anonymous profiles — launch malicious attacks against dignitaries, political figures, public servants and government officials. These attacks frequently take the form of videos, livestreams, and posts that employ vulgar, obscene and highly offensive language. The content often crosses the boundary of criticism into personal vilification, character assassination and misinformation. More disturbingly, there is a growing trend of digital blackmail, where private information, photos or fabricated content is used as leverage to extort money, favours, or silence from victims. Such actions not only harm the individuals targeted but also undermine the dignity of public institutions and the rule of law.

The lack of digital literacy, weak enforcement of cyber laws and the absence of a clear framework for accountability on online platforms have further contributed to this toxic environment. If left unchecked, this trend poses a serious threat to public trust, social harmony and the integrity of governance. The growing menace of online abuse, defamation and blackmail targeting dignitaries and public officials in Bangladesh can be addressed through a balanced, multi-dimensional strategy, beginning with properly resourced cybercrime investigation units that can identify and prosecute offenders without infringing on freedom of expression.

Recently, a fake video statement was circulated on social media platforms using a still photograph of the inspector general of police of Bangladesh combined with an artificially generated voice. In this video, advanced AI technology was used to clone the IGP’s voice and deliver a fabricated and misleading message, giving the false impression that it was an official communication. This deliberate misuse of AI constitutes a serious cyber offence under the existing laws of Bangladesh, including those related to digital fraud, identity misuse and incitement. Such malicious activities not only aim to mislead the public but also create unnecessary sensation, confusion and distrust. They can incite public resentment and potentially threaten national security and public order.

Potential benefits of AI

INTEGRATING artificial intelligence into crime detection and law enforcement holds significant promise for the Bangladesh police, offering a transformative approach to improving investigative accuracy, operational efficiency, public safety and citizen engagement.

One of the most immediate advantages of AI is its ability to speed up investigations while enhancing accuracy. Traditionally time-consuming processes such as analysing forensic evidence, reviewing CCTV footage or identifying suspects can now be completed rapidly through AI-powered tools. These systems can sift through massive datasets, detect patterns and establish links between incidents, thereby accelerating case resolution and improving the quality of investigations.

AI also enhances the overall efficiency of law enforcement operations. By analysing real-time data, AI can optimise patrol routes, traffic flows and emergency response coordination. This data-driven approach ensures that resources are deployed where they are most needed, reducing waste and lowering operational costs while maximising impact on the ground.
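
The resource-deployment idea can be made concrete with a toy dispatcher that greedily assigns the nearest free patrol unit to each incoming incident. The locations below are invented, and real systems solve richer versions of this assignment problem with travel-time models and demand forecasts.

```python
# Toy dispatch optimiser: greedily assign the nearest free patrol unit
# to each incoming incident, using straight-line distance as a stand-in
# for real travel time.
import math

units = {"U1": (23.71, 90.40), "U2": (23.81, 90.41), "U3": (23.75, 90.39)}
incidents = [("burglary", (23.80, 90.42)), ("accident", (23.72, 90.41))]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

free = dict(units)
for name, loc in incidents:
    unit = min(free, key=lambda u: dist(free[u], loc))
    print(f"{name} at {loc} -> dispatch {unit}")
    del free[unit]  # the unit is busy until the incident is cleared
```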

In the realm of counterterrorism, AI proves indispensable by enhancing intelligence gathering and threat analysis. It can monitor social media for signs of radicalisation, track suspicious financial transactions, and analyse call records to uncover extremist networks. This strengthens the capacity of the Bangladesh police to detect and disrupt terrorist plots before they materialise.

Lastly, AI can play a constructive role in fostering better police-community relations. Tools such as AI-powered chatbots and intelligent complaint management systems offer citizens a more efficient and transparent way to communicate with law enforcement. By handling non-emergency queries, providing status updates and reducing wait times at police stations, these systems can build public trust and promote greater accountability within the police force.

 
Challenges and limitations

WHILE the integration of artificial intelligence into crime detection holds great potential for the Bangladesh police, its implementation is not without significant challenges and limitations.

One of the foremost challenges is the high cost of implementing AI systems. Establishing a robust AI-driven infrastructure requires not only cutting-edge technology and secure digital platforms but also a team of skilled professionals to operate and maintain these systems. The government must be prepared to invest substantial resources in developing AI-powered crime databases, smart surveillance networks and comprehensive training programmes for police officers to handle AI-based investigations efficiently.

Ethical and legal concerns also present a major obstacle to the widespread adoption of AI in policing. The deployment of surveillance tools and automated data analysis raises important questions related to individual privacy, data protection and civil liberties. In the absence of well-defined legal frameworks, there is a risk of unauthorised data collection and misuse of personal information. Additionally, AI algorithms used in predictive policing may inadvertently reinforce racial, ethnic or socioeconomic biases, leading to discrimination and mistrust within communities.

A further limitation is the current lack of AI expertise within law enforcement agencies. The Bangladesh police face a shortage of officers trained in AI, data science and cyber security.

Moreover, cyber security threats and concerns about data integrity cannot be overlooked. AI systems depend heavily on the secure collection, storage and processing of vast amounts of data. If police databases are compromised through hacking or data manipulation, it could not only derail sensitive investigations but also pose a serious threat to national security. Ensuring robust cyber security protocols and digital safeguards will be vital to protect AI-enabled systems from malicious interference.

To effectively harness artificial intelligence in crime detection and law enforcement, the police must adopt a structured, phased road map grounded in innovation, ethics and sustainability. This process should begin with a comprehensive, needs-based assessment tailored to Bangladesh's crime patterns. Strategic priorities include launching pilot projects in high-crime urban areas such as Old Dhaka, Keraniganj and Chattogram. These pilots will help test predictive analytics, facial recognition and surveillance systems in real-world settings and generate evidence-based insights for scaling.

Simultaneously, AI-specific training should be embedded into existing police education frameworks.

Developing a robust AI infrastructure is foundational. Establishing AI-powered Crime Analysis Centres in major cities like Dhaka, Chattogram, and Sylhet will enable real-time crime monitoring, faster data-driven investigations and enhanced operational decision-making. These centres should be closely integrated with the National Crime Database for seamless data sharing and analytics.

Collaborations with domestic universities, international research institutions and tech firms will be key for technology transfer and capacity development. To ensure responsible and rights-based AI deployment, the government must institute strong regulatory frameworks and enforceable data protection laws. This includes the development of ethical guidelines, standard operating procedures, and human rights impact assessments specific to AI in policing.

Building public trust is central to the success of AI in policing. Civic participation — through engagement with civil society, legal experts, technologists and community leaders — is vital for developing a socially inclusive and accountable AI policy framework. Transparent communication, digital literacy campaigns, and community outreach will help clarify AI’s role, dispel fears and promote informed public dialogue.

Moreover, all AI deployments must be evaluated for algorithmic bias, disproportionate impact and community acceptance. Regular monitoring and evaluation should be institutionalised to ensure continual improvement.

Artificial intelligence holds the transformative potential to revolutionise crime detection and policing in Bangladesh by enhancing investigative capabilities, preventing crime and strengthening public safety. With strategic investments in technology, capacity-building, and legal safeguards, AI can support the development of a smarter, more efficient and proactive law enforcement system. However, the integration of AI must be approached with caution — guided by ethical principles, robust regulation and a strong commitment to protecting individual privacy and rights. AI is not inherently good or bad; it is a powerful tool whose impact depends entirely on how, where and for whom it is used. In the vision of a progressive, inclusive society, it must be harnessed as an instrument of empowerment, innovation and equity — ensuring that technological progress translates into real safety and justice for all citizens.

Md Motiar Rahman is a retired deputy inspector general of police.​
 
