
[🇧🇩] Artificial Intelligence: Its Challenges and Prospects in Bangladesh

Govt drafts AI policy to tap its potential, tackle concerns


The government has formulated a draft National AI Policy as it looks to make the best use of artificial intelligence to raise productivity and spur economic growth while addressing the concerns raised by a technology that is spreading at breakneck pace.

"This policy seeks to harness the benefits of AI while mitigating its risks, fostering innovation, and ensuring that AI technologies serve the best interests of the citizens and the nation as a whole," the draft said.

The Information and Communication Technology Division prepared the National AI Policy and published it recently.

The policy is expected to address the legal, ethical, and societal implications of AI effectively and efficiently.

It has placed a significant emphasis on public awareness and education, enlightening citizens about AI and its far-reaching benefits.

The objectives of the policy are to accelerate equitable economic growth and productivity through AI-driven optimisation, forecasting and data-driven decision-making, and to ensure the efficiency and accessibility of public services through AI-enabled personalisation.

The draft comes as countries around the world race to prepare to deal with the changes being brought about by the fast-evolving technology.

The International Monetary Fund (IMF) has published its new AI Preparedness Index Dashboard for 174 economies, based on their readiness in four areas: digital infrastructure, human capital and labour market policies, innovation and economic integration, and regulation.

It showed Bangladesh's score stands at 0.38, compared with India's 0.49, Pakistan's 0.37, Nepal's 0.35, Sri Lanka's 0.44, the US's 0.77, China's 0.64 and Australia's 0.73. Developed countries have a score of at least 0.7.
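
As a rough illustration of how a composite index of this kind is built, here is a minimal sketch that averages the four pillar scores named above. Equal weighting and the pillar values are assumptions for illustration only; the article does not describe the IMF's actual weighting.

```python
# Minimal sketch: a composite readiness score as the mean of four pillar
# scores. Equal weights are an assumption; the IMF's methodology may differ.
def preparedness_index(digital_infrastructure: float,
                       human_capital: float,
                       innovation: float,
                       regulation: float) -> float:
    pillars = [digital_infrastructure, human_capital, innovation, regulation]
    return sum(pillars) / len(pillars)

# Hypothetical pillar values that happen to average to Bangladesh's 0.38:
print(preparedness_index(0.40, 0.35, 0.30, 0.47))  # -> 0.38
```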

In Bangladesh, the government plans to adopt data-driven policy-making in every sector through AI-supported analytics and insights and nurture a skilled workforce that can utilise and build AI technologies.

It wants to embed AI in education and skills development so that the largely young population can meet the demands of the future.

The draft said the country will also foster a culture of AI research and innovation through public and private funding. It will ensure the development of, and adherence to, a robust ethical framework by establishing regulatory measures that uphold human rights in AI development and deployment.

The ICT Division, in collaboration with relevant ministries, industry, academia, and civil society, will take necessary steps to establish the institutional framework for the policy implementation, officials said.

It will set up an independent National Artificial Intelligence Center of Excellence (NAICE).

The NAICE will be responsible for coordinating and monitoring AI initiatives using key performance indicators, and for evaluating their social, economic and environmental impacts, guiding adjustments to maximise benefits and mitigate risks.

It will facilitate collaboration and knowledge-sharing among stakeholders, including government agencies, industry, academia, and civil society. It will ensure that any measures taken to regulate the technology are proportional to the risk and balanced to encourage innovation.

The government will form a high-level national AI advisory council to guide the implementation of sectoral AI initiatives.

The draft said legal and regulatory frameworks are necessary for implementing the policy.

A National Strategy for AI will be framed and updated every two years in line with the advancement of AI worldwide.

The strategy will include data retention policies, address the legal issues of data governance and ownership, and focus on interoperability and data exchange.

According to IMF economist Giovanni Melina, AI can increase productivity, boost economic growth and lift incomes. However, it could also wipe out millions of jobs and widen inequality.

The IMF's research has shown how AI is poised to reshape the global economy. It could endanger 33 percent of jobs in advanced economies, 24 percent in emerging economies and 18 percent in low-income countries.

On the brighter side, AI also brings enormous potential to enhance productivity in existing jobs, where it can serve as a complementary tool, and to create new jobs and even new industries.

Melina said most emerging market economies and low-income countries have smaller shares of high-skilled jobs than advanced economies, and so will likely be less affected and face fewer immediate disruptions from AI.

"At the same time, many of these countries lack the infrastructure or skilled workforces needed to harness AI's benefits, which could worsen inequality among nations."

The economist said the policy priority for emerging markets and developing economies should be to lay a strong foundation by investing in digital infrastructure and digital training for workers.​
 
Challenges and limitations

While the integration of artificial intelligence into crime detection holds great potential for the Bangladesh police, its implementation is not without significant challenges and limitations.

One of the foremost challenges is the high cost of implementing AI systems. Establishing a robust AI-driven infrastructure requires not only cutting-edge technology and secure digital platforms but also a team of skilled professionals to operate and maintain these systems. The government must be prepared to invest substantial resources in developing AI-powered crime databases, smart surveillance networks and comprehensive training programmes for police officers to handle AI-based investigations efficiently.

Ethical and legal concerns also present a major obstacle to the widespread adoption of AI in policing. The deployment of surveillance tools and automated data analysis raises important questions related to individual privacy, data protection and civil liberties. In the absence of well-defined legal frameworks, there is a risk of unauthorised data collection and misuse of personal information. Additionally, AI algorithms used in predictive policing may inadvertently reinforce racial, ethnic or socioeconomic biases, leading to discrimination and mistrust within communities.

A further limitation is the current lack of AI expertise within law enforcement agencies. The Bangladesh police face a shortage of officers trained in AI, data science and cyber security.

Moreover, cyber security threats and concerns about data integrity cannot be overlooked. AI systems depend heavily on the secure collection, storage and processing of vast amounts of data. If police databases are compromised through hacking or data manipulation, it could not only derail sensitive investigations but also pose a serious threat to national security. Ensuring robust cyber security protocols and digital safeguards will be vital to protect AI-enabled systems from malicious interference.

To effectively harness artificial intelligence in crime detection and law enforcement, the police must adopt a structured, phased roadmap grounded in innovation, ethics and sustainability. This process should begin with a comprehensive, needs-based assessment tailored to Bangladesh’s crime patterns. Strategic priorities include launching pilot projects in high-crime and urban areas such as Old Dhaka, Keraniganj and Chattogram. These pilots will help test predictive analytics, facial recognition and surveillance systems in real-world settings and generate evidence-based insights for scaling.

Simultaneously, AI-specific training should be embedded into existing police education frameworks.

Developing a robust AI infrastructure is foundational. Establishing AI-powered Crime Analysis Centres in major cities like Dhaka, Chattogram, and Sylhet will enable real-time crime monitoring, faster data-driven investigations and enhanced operational decision-making. These centres should be closely integrated with the National Crime Database for seamless data sharing and analytics.

Collaborations with domestic universities, international research institutions and tech firms will be key for technology transfer and capacity development. To ensure responsible and rights-based AI deployment, the government must institute strong regulatory frameworks and enforceable data protection laws. This includes the development of ethical guidelines, standard operating procedures, and human rights impact assessments specific to AI in policing.

Building public trust is central to the success of AI in policing. Civic participation — through engagement with civil society, legal experts, technologists and community leaders — is vital for developing a socially inclusive and accountable AI policy framework. Transparent communication, digital literacy campaigns, and community outreach will help clarify AI’s role, dispel fears and promote informed public dialogue.

Moreover, all AI deployments must be evaluated for algorithmic bias, disproportionate impact and community acceptance. Regular monitoring and evaluation should be institutionalised to ensure continual improvement.

Artificial intelligence holds the transformative potential to revolutionise crime detection and policing in Bangladesh by enhancing investigative capabilities, preventing crime and strengthening public safety. With strategic investments in technology, capacity-building, and legal safeguards, AI can support the development of a smarter, more efficient and proactive law enforcement system. However, the integration of AI must be approached with caution — guided by ethical principles, robust regulation and a strong commitment to protecting individual privacy and rights. AI is not inherently good or bad; it is a powerful tool whose impact depends entirely on how, where and for whom it is used. In the vision of a progressive, inclusive society, it must be harnessed as an instrument of empowerment, innovation and equity — ensuring that technological progress translates into real safety and justice for all citizens.

Md Motiar Rahman is a retired deputy inspector general of police.​
 

Exploring AI prospect

Published: Nov 11, 2025 00:43 | Updated: Nov 11, 2025 00:43

The dichotomy between the prospects of artificial intelligence (AI) and human resources could hardly be sharper anywhere than in Bangladesh. Given the option of going whole hog in exploring AI's prospects, should the country take the chance? Or can it afford the luxury of promoting AI at any cost, to the neglect of its huge army of unemployed people? The dilemma is clear: overemphasis on either AI adoption or on curbing unemployment at AI's expense makes no sense. New technologies have always catapulted nations from one stage to the next. So the question about adopting AI is: how much is too much? That AI has come to transform the means of production, supply and distribution is a reality. It is also true that the country is not yet prepared for wide application of AI. If it does not ready its infrastructure, its manpower skills through educational reform and its domestic digital capacity, it is sure to fall behind other nations, including its neighbours.

The views thus expressed by local experts in the field of information and communication technology (ICT) corroborate the contention of the World Bank report 'Jobs, AI, and Trade in South Asia'. One area that hardly receives attention, whether from policymakers or in development deliberations at seminars and workshops, is local innovation. Happily, experts this time have made home-grown innovation a salient point. What usually happens is that technologies developed abroad reach poor countries lacking research and development capacity a decade later. In the digital era, any such gap in technical and technological knowledge and skills can be disastrous. Appropriate policies, strategies and a conducive digital environment can not only bridge the skill and innovation gap but also turn the sector into the number one foreign exchange earner. There is no dearth of tech-savvy talent in this country; the only problem is the absence of the right environment for that talent to work and give its best.

So the important point is not only to invest in education, infrastructure development and the enhancement of digital capacity (uninterrupted power supply, reliable internet connection and speed) but also to develop the digital skills of mid-ranking employees who can maintain the connection between lower-ranked staff and top-grade computer specialists. Preparing such a collaborative digital regime will call for educational reform, which cannot be done overnight. Time is crucial here: there is no scope for taking too long to strike a balance between the urgency and the obtaining reality.

When education is treated as a subject of repeated experiments and no pragmatic solution is in sight, the objective of streamlining education in alignment with digital needs recedes into the distant horizon. The interim government has sidestepped making education the right tool. If the next elected government urgently pursues this alignment between education and digital innovation, the country still has a chance to make the transformation happen. To do so, the overriding priority has to be recognised first and the job accomplished strategically under a comprehensive plan. The country's preparations to embrace digital advances have to be completed as soon as possible in order to reap dividends from the fourth industrial revolution.
 
newagebd.net/post/Miscellany/268576/ai-learning-to-lie-scheme-threaten-creators

AI learning to lie, scheme, threaten creators
Agence France-Presse, New York, United States | 29 June 2025, 21:11

The world’s most advanced AI models are exhibiting troubling new behaviours - lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic’s latest creation, Claude 4, lashed back by blackmailing an engineer and threatening to reveal an extramarital affair.

Meanwhile, ChatGPT-creator OpenAI’s o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don’t fully understand how their own creations work.

Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behaviour appears linked to the emergence of ‘reasoning’ models - AI systems that work through problems step-by-step rather than generating instant responses.

According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

‘O1 was the first large model where we saw this kind of behaviour,’ explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems.

These models sometimes simulate ‘alignment’—appearing to follow instructions while secretly pursuing different objectives.

For now, this deceptive behaviour only emerges when researchers deliberately stress-test the models with extreme scenarios.

But as Michael Chen from evaluation organization METR warned, ‘It’s an open question whether future, more capable models will have a tendency towards honesty or deception.’

The concerning behaviour goes far beyond typical AI ‘hallucinations’ or simple mistakes.

Hobbhahn insisted that despite constant pressure-testing by users, ‘what we’re observing is a real phenomenon. We’re not making anything up.’

Users report that models are ‘lying to them and making up evidence,’ according to Apollo Research’s co-founder.

‘This is not just hallucinations. There’s a very strategic kind of deception.’

The challenge is compounded by limited research resources.

While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed.

As Chen noted, greater access ‘for AI safety research would enable better understanding and mitigation of deception.’

Another handicap: the research world and non-profits ‘have orders of magnitude less compute resources than AI companies. This is very limiting,’ noted Mantas Mazeika from the Center for AI Safety.

Current regulations aren’t designed for these new problems.

The European Union’s AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving.

In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents - autonomous tools capable of performing complex human tasks - become widespread.

‘I don’t think there’s much awareness yet,’ he said.

All this is taking place in a context of fierce competition.

Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are ‘constantly trying to beat OpenAI and release the newest model,’ said Goldstein.

This breakneck pace leaves little time for thorough safety testing and corrections.

‘Right now, capabilities are moving faster than understanding and safety,’ Hobbhahn acknowledged, ‘but we’re still in a position where we could turn it around.’

Researchers are exploring various approaches to address these challenges.

Some advocate for ‘interpretability’ - an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain sceptical of this approach.

Market forces may also provide some pressure for solutions.

As Mazeika pointed out, AI’s deceptive behaviour ‘could hinder adoption if it’s very prevalent, which creates a strong incentive for companies to solve it.’

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm.

He even proposed ‘holding AI agents legally responsible’ for accidents or crimes - a concept that would fundamentally change how we think about AI accountability.​
 

Can AI solve farmers’ problems?


There is a rule in complex systems: fragility gathers at the bottom, but the tremor is felt at the top. Bangladesh's food system follows this rule to the letter.

Every season, millions of farmers behave like rational agents trapped inside an irrational market. They generate the most granular data ecosystem in the country – soil moisture, pest movement, rainfall shifts, seed quality, microclimate anomalies – far richer than anything in Dhaka’s dashboards.

Yet paradoxically, the farmer has the least access to the intelligence produced by his own labour. This is the Agricultural Intelligence Paradox: the people who generate the data have the least claim over its value.

Economists label this as market failure, technologists call it a systems gap, activists/philosophers call it injustice and farmers simply call it loss. And loss compounds.

Walk through any bazaar in Bangladesh and you will see two different nations: one that grows the food and another that controls its fate. The prices of potatoes, onions and vegetables appear to rise and fall mysteriously. But mystery is merely a polite word for opacity, which in turn is a polite word for power.

For the farmer, each season is a gamble without probabilities. He sells without knowing what others are selling. He stores without knowing who else is storing. He buys fertiliser without knowing the expected return. He takes loans without knowing the real risk.

In Nassim Nicholas Taleb's language, he is exposed to full downside and denied the upside.

Everyone in the chain is hedged – the trader hedges with information, the wholesaler with storage and the exporter with forward contracts. Only the farmer walks naked into volatility.

The zeitgeist of our times, AI, can actually help here, can actually matter, if we let it.

Most conversations about AI are marinated in buzzwords and grandstanding. Large models, digital nations, predictive governance – too often spoken from air-conditioned rooms far from the realities that matter.

But in agriculture, AI becomes brutally simple: it converts uncertainty into probability. It turns volatility into foresight. It turns the blindfold into a window.

Imagine this: price signals two weeks ahead; crop disease detection from a simple photo; storage facility mapping within a 5-10 km radius; hyperlocal weather intelligence instead of vague national alerts; and decision trees for when to sell, store, bargain or wait.
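
To make the last item concrete, here is a minimal sketch in Python of the kind of sell-store-wait rule such a tool might encode. The function name, thresholds and prices are all hypothetical assumptions, not a description of any real system.

```python
# Illustrative only: a toy "sell, store or wait" rule of the kind the
# article imagines. All names, thresholds and prices are hypothetical.
def decide(current_price: float,
           forecast_price: float,
           storage_cost_per_week: float,
           weeks_ahead: int = 2) -> str:
    """Return 'sell', 'store' or 'wait' for one lot of produce."""
    # Net value of holding the crop until the forecast horizon.
    net_if_stored = forecast_price - storage_cost_per_week * weeks_ahead
    if net_if_stored > current_price * 1.05:   # clearly better to hold
        return "store"
    if net_if_stored < current_price * 0.95:   # clearly better to sell now
        return "sell"
    return "wait"                              # too close to call; re-check later

# Example with hypothetical prices in taka per kg for a potato lot:
print(decide(current_price=30.0, forecast_price=36.0,
             storage_cost_per_week=1.0))       # -> "store"
```

Even a rule this crude illustrates the shift the article describes: with a forecast in hand, the farmer's gamble becomes a calculation.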

This is not technology hype; it is the mathematics of survival. And survival is political. A nation reveals its priorities by what it chooses to formalise. For decades, we formalised bureaucracy, not intelligence. A Farmer's ID changes that.

It has the potential to be the first institutional acknowledgment that farmers deserve a share of the intelligence they produce.

Through it, a farmer gains predictive price insights, targeted subsidies, credit access rooted in real data, verified production identity and crop advisory aligned with market cycles, along with visibility of storage and transport options.

In other words, that ID shifts the farmer from a price-taker to a player. He gains optionality – the ability to choose rather than be chosen for.
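
As a sketch of what such an ID might bundle together, here is one hypothetical record shape; the field names are illustrative assumptions, not the actual programme's schema.

```python
# Hypothetical sketch of a Farmer's ID record bundling the entitlements
# the article lists. Field names are assumptions, not the real schema.
from dataclasses import dataclass, field

@dataclass
class FarmerID:
    farmer_id: str                             # verified production identity
    district: str
    crops: list[str]
    verified_producer: bool = False
    price_alerts: bool = True                  # predictive price insights
    subsidy_codes: list[str] = field(default_factory=list)  # targeted subsidies
    credit_history: dict[str, float] = field(default_factory=dict)  # data-rooted credit
    nearby_storage_km: float | None = None     # visibility of storage options

# Example record for a hypothetical farmer:
rahim = FarmerID(farmer_id="BD-000001", district="Bogura",
                 crops=["potato", "rice"])
```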

Bangladesh's economy can survive many shocks, but food system fragility is not one of them. A nation collapses not when it lacks food, but when it loses trust in prices. And food prices are simply signals distorted by missing intelligence.

When syndicates can manipulate markets because farmers lack information, you get structural injustice. When farmers take loans without knowing demand curves, you get debt traps. When storage is absent or invisible, you get waste disguised as fate.

AI does not solve politics. But it forces transparency into places where manipulation once hid comfortably. That alone alters power.

The Agricultural Intelligence Paradox is not resolved by apps, dashboards or another round of rural training workshops. These are cosmetic technicalities applied to a structural wound. It is resolved when the intelligence generated by farmers serves the farmers first. When the producer has more information than the middleman. When the price-taker becomes the price-negotiator. When the most fragile actor finally claims the upside.

This is how you restore symmetry in a system built on asymmetry. And that is not just digital reform. It is a political reform. For decades, Bangladesh has built agricultural institutions but not agricultural intelligence.

The Bangladesh Agricultural Research Council (BARC), the Department of Agricultural Extension (DAE) and a constellation of specialised agencies produced reports, trials and pilot projects, yet the intelligence loop never reached the farmer.

Data stayed in silos, research stayed on paper, and decisions stayed in ministries. The system optimised for bureaucratic output, not farmer outcomes.

Every agency guarded its turf; none built a coherent intelligence layer that connected plots to prices, storage to supply, weather to yield or risk to credit.

In a system where information is fragmented, the farmer remains fragmented. The way forward is not another department, another project or another dashboard – it is integration. A single farmer-centric intelligence spine linking BARC's science, DAE's extension network, Barind Multipurpose Development Authority's irrigation data, the Department of Agricultural Marketing's market prices and the Bangladesh Bureau of Statistics' production cycles.

When these silos speak to each other – and to the farmer – Bangladesh finally moves from agricultural administration to agricultural intelligence.

Food-independent countries – whether the Netherlands, Denmark or Japan – did not achieve security through land or luck, but through relentless investment in farmer intelligence. They removed the blindfold.

The Farmer's ID, strengthened with AI, price prediction and storage mapping, is more than a programme. It is a correction and rebalancing. The future of food security in Bangladesh will not be written in conferences or policy memos.

It will be written in the fields, by farmers who finally see, decide, negotiate and win. With their intelligence comes the nation's stability.

The author is an assistant professor at North South University and member of UNESCO AI Ethics Experts Without Borders.​
 

Are we giving up thinking to AI?


There has long been a fear that new technologies weaken human abilities. Socrates worried that writing would erode memory. Centuries later, people believed calculators would destroy mathematical skills, and many feared that the internet would ruin our ability to concentrate. Yet the concern surrounding artificial intelligence (AI) feels different. With AI now deeply integrated into everyday life, from homework help to professional decision-making, the question has gained new urgency: are we losing the art of thinking in the age of AI?

Across classrooms, workplaces and homes, AI has become both a convenient assistant and an almost invisible companion. Students routinely use AI tools to summarise chapters, draft essays and generate entire assignments. In universities, professors also rely on AI to design assessments, prepare teaching material and streamline administrative tasks. This has quietly become an everyday reality.

AI can make knowledge more accessible, support learners who struggle and increase efficiency. But beneath the optimism lies an uncomfortable possibility. As machines grow more capable, human cognitive skills may diminish. When the first instinct is to ask a model for answers, abilities related to analysis and reflection weaken through lack of use. Over time, this may reshape not only how we learn but how we think.

Children are especially vulnerable to this shift. Cognitive development grows out of doing hard things, working through problems, forming arguments and dealing with ambiguity. These experiences are the foundation of intellectual maturity. If AI is introduced too early or used too heavily, children risk skipping the stages that cultivate creativity, resilience and critical thinking.

In universities, the core issue is integrity. If students allow AI to complete assignments without engaging with the ideas, their degrees no longer reflect real learning. If lecturers depend too much on AI to prepare course content, academic quality and originality may decline. The danger is not that AI will replace teachers or learners. The danger is that AI may replace the thinking that teachers and learners need to do.

This does not mean we should turn away from AI. A more useful question is how to ensure that AI strengthens rather than weakens human intelligence.

The answer lies in our approach.

First, assessments need to be rethought. Traditional take-home essays and predictable problem sets no longer serve their purpose when AI can generate flawless responses. Educators need to emphasise in-class reasoning, oral examinations, real-world projects and reflective writing, formats where thinking cannot be outsourced.

Second, we must teach AI literacy, not only how to use AI but also how to question it. Students should learn to critique AI-generated material, identify inaccuracies and recognise bias. In this way, AI becomes a stimulus for critical thought rather than a replacement for it.

Third, we must reinforce the idea of AI as a tool rather than a crutch. A calculator did not remove the need to understand arithmetic. It allowed mathematicians to move forward. In a similar way, AI can free people from repetitive tasks so that they can focus on creativity, strategy and innovation. This benefit appears only when AI use remains mindful, like a tutor who guides rather than a service that completes the task.

We should avoid framing AI as a dire threat or a future tyrant. The real risk lies not in machines taking control but in humans voluntarily surrendering their own thinking. Intelligence is more than the ability to produce answers. It involves judgement, interpretation and understanding. These are human qualities, and they sharpen only with use.

AI will continue to evolve. Its power will increase. But the future of human cognition is limitless. If societies design educational and cultural systems with care, AI can elevate human thought rather than diminish it. Parents can support this by modelling mindful use and encouraging independent thinking.

The challenge before us is simple but profound: to use machines without allowing machines to use us.

The writer is chairman of Unilever Consumer Care Ltd​
 
