
[🇧🇩] Artificial Intelligence: Its Challenges and Prospects in Bangladesh


Govt drafts AI policy to tap its potential, tackle concerns





The government has formulated the draft National AI Policy as it looks to make the best use of artificial intelligence to raise productivity and spur economic growth while addressing the concerns raised by a technology spreading at breakneck pace.

"This policy seeks to harness the benefits of AI while mitigating its risks, fostering innovation, and ensuring that AI technologies serve the best interests of the citizens and the nation as a whole," the draft said.

The Information and Communication Technology Division prepared the National AI Policy and published it recently.

The policy is expected to address the legal, ethical, and societal implications of AI effectively and efficiently.

It has placed a significant emphasis on public awareness and education, enlightening citizens about AI and its far-reaching benefits.

The objectives of the policy are to accelerate equitable economic growth and productivity through AI-driven optimisation, forecasting, and data-driven decision-making, and to ensure efficiency and accessibility of public services through AI-enabled personalisation.

The draft comes as countries around the world race to prepare to deal with the changes being brought about by the fast-evolving technology.

The International Monetary Fund (IMF) has published its new AI Preparedness Index Dashboard for 174 economies, based on their readiness in four areas: digital infrastructure, human capital and labour market policies, innovation and economic integration, and regulation.

It showed that Bangladesh scores 0.38, compared with India's 0.49, Pakistan's 0.37, Nepal's 0.35, Sri Lanka's 0.44, the US's 0.77, China's 0.64, and Australia's 0.73. Developed countries generally score at least 0.7.
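To make the idea of a composite readiness index concrete, here is a minimal sketch of how a score built from four equally weighted pillars could be computed. The pillar values and the equal weighting are illustrative assumptions only, not the IMF's published sub-scores or methodology.

```python
# Illustrative only: a composite readiness score built from four pillars.
# The pillar values below are hypothetical placeholders, not IMF data,
# and equal weighting is an assumption made for the sake of the example.

pillars = {
    "digital_infrastructure": 0.40,
    "human_capital_and_labour": 0.36,
    "innovation_and_integration": 0.38,
    "regulation": 0.38,
}

composite = sum(pillars.values()) / len(pillars)  # unweighted mean of the pillars
print(f"Illustrative composite score: {composite:.2f}")  # -> 0.38
```

In this illustration the composite lands near Bangladesh's reported 0.38 only because the placeholder pillar values were chosen around that level.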

In Bangladesh, the government plans to adopt data-driven policy-making in every sector through AI-supported analytics and insights and nurture a skilled workforce that can utilise and build AI technologies.

It wants to embed AI in education and skills development so that the largely young population can meet the demands of the future.

The draft said the country will also foster a culture of AI research and innovation through public and private funding. It will ensure the development of, and adherence to, a robust ethical framework by establishing regulatory measures that uphold human rights in AI development and deployment.

The ICT Division, in collaboration with relevant ministries, industry, academia, and civil society, will take necessary steps to establish the institutional framework for the policy implementation, officials said.

It will set up an independent National Artificial Intelligence Center of Excellence (NAICE).

The NAICE will be responsible for coordinating and monitoring AI initiatives using key performance indicators and for evaluating their social, economic, and environmental impacts, guiding adjustments for maximum benefit and risk mitigation.

It will facilitate collaboration and knowledge-sharing among stakeholders, including government agencies, industry, academia, and civil society. It will ensure that any measures taken to regulate the technology are proportional to the risk and balanced to encourage innovation.

The government will form a high-level national AI advisory council to guide the implementation of sectoral AI initiatives.

The draft said legal and regulatory frameworks are necessary for implementing the policy.

The National Strategy for AI will be framed, and it will be updated every two years in accordance with the advancement of AI worldwide.

The strategy will include data retention policies, address the legal issues of data governance and ownership, and focus on interoperability and data exchange.

According to IMF economist Giovanni Melina, AI can increase productivity, boost economic growth, and lift incomes. However, it could also wipe out millions of jobs and widen inequality.

IMF's research has shown how AI is poised to reshape the global economy. It could endanger 33 percent of jobs in advanced economies, 24 percent in emerging economies, and 18 percent in low-income countries.

But, on the brighter side, it also brings enormous potential to enhance the productivity of existing jobs for which AI can be a complementary tool and to create new jobs and even new industries.

Melina said most emerging market economies and low-income countries have smaller shares of high-skilled jobs than advanced economies, and so will likely be less affected and face fewer immediate disruptions from AI.

"At the same time, many of these countries lack the infrastructure or skilled workforces needed to harness AI's benefits, which could worsen inequality among nations."

The economist said the policy priority for emerging markets and developing economies should be to lay a strong foundation by investing in digital infrastructure and digital training for workers.​
 

Without AI, Bangladesh risks falling behind: experts
BPO summit begins in Dhaka



If business process outsourcing (BPO) companies in Bangladesh fail to adopt technologies like artificial intelligence, machine learning, and large language models, they will fall behind in global competition, experts warned today.

"Technological advancement in the past two years has surpassed all previous eras of innovation," said Faiz Ahmad Taiyeb, special assistant to the chief adviser with executive authority over the posts, telecommunications and ICT ministry.

"If companies cannot adapt to this transformation, they may shut down within two years… They will be eliminated by default," he said.

"Especially for IT and ITES companies, there is no room to survive without embracing change. This failure will not only harm businesses but also damage the country's competitiveness," he added.

Taiyeb was addressing the inauguration of a two-day BPO Summit Bangladesh 2025 at Senaprangan, Dhaka.

Organised by the Bangladesh Association of Contact Center and Outsourcing (BACCO), the event bore the theme "BPO 2.0: Revolution to Innovation" this year, signalling a shift towards innovation-driven growth in the industry.

Taiyeb urged BPO companies to swiftly assess what peer nations like China, India, Vietnam and the Philippines are doing in AI adoption.

"Only then can you approach the government with informed policy demands," he said.

He emphasised that IT engineers must understand sectoral challenges, as technology now permeates every industry.

"The way Chinese companies are leveraging generative AI and accelerating business process upgrades—if we fail to keep pace, we must identify these gaps and bring them to the government's attention," said Taiyeb.

Bangladesh has set a target to generate $5 billion from IT exports by 2030.

"Sri Lanka, one-tenth our size, has set a similar goal. Yet our current annual export hovers at around $700 million to $800 million. We must double our IT exports every year—this is a shared national challenge," he said.

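As a rough check on that claim, the sketch below computes the compound annual growth rate implied by the stated figures. The roughly $750 million baseline and the assumption that growth runs from 2025 to 2030 are illustrative readings of the quote, not official figures.

```python
# Back-of-the-envelope sketch: growth implied by a ~$0.75bn baseline (assumed
# to be the 2025 level) and the stated $5bn target for 2030. Illustrative only.
baseline, target = 0.75, 5.0      # IT exports, USD billions
years = 2030 - 2025               # growth horizon in years

# target = baseline * (1 + r) ** years  ->  solve for the annual rate r
required_cagr = (target / baseline) ** (1 / years) - 1
print(f"Required annual growth: {required_cagr:.0%}")  # roughly 46% per year

# Literal doubling every year from the same baseline overshoots sooner:
value = baseline
for year in range(2026, 2031):
    value *= 2
    print(year, round(value, 2))  # crosses the $5bn mark around 2028
```

Under these assumptions the target implies growth of roughly 46 percent a year, so "doubling every year" is an even stricter pace than the headline figure strictly requires.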
Taiyeb recommended providing export incentives of 8 percent to 10 percent for frontier technologies like AI, while offering 4 percent to 5 percent for legacy segments.

"This ensures that new tech is prioritised without overburdening the government," he said.

He predicted that legacy call centres would disappear within five years, transforming into AI and large language model (LLM)-powered operations. "This sector must embrace transformation now."

BACCO President Tanvir Ibrahim said, "The BPO Summit is not just an industry event—it is a collective declaration of our confidence, capability and future aspirations. We believe this summit will help empower the youth with technology-driven employment and entrepreneurship opportunities."

Adilur Rahman Khan, adviser to the interim government on industry and housing & public works, attended as chief guest.

"The BPO sector is no longer just about outsourcing—it symbolises human resource development and economic transformation. The government will provide full support for its growth," he said.

ICT Secretary Shish Haider Chowdhury and BACCO Secretary General Faisal Alim also spoke.

This year's summit features nine international seminars and workshops, a job fair, special sessions on entrepreneurship, freelancer platforms, and a large exhibition with domestic and international BPO and tech companies.

Diplomats, tech experts and global buyers are attending.

A major attraction this year is an "Experience Zone", showcasing cutting-edge technologies including Starlink satellite internet, immersive AR and VR simulations themed on the July uprising, advanced drones and submarine technologies, and robotics exhibitions.​
 

Shikho partners with Meta to launch AI literacy course in Bangladesh




Shikho, the edtech platform, has recently announced a partnership with Meta to develop and deliver a new AI literacy course aimed at broadening public understanding of artificial intelligence (AI).

According to Shahir Chowdhury, Founder and CEO of Shikho, it will be a "first-of-its-kind course on AI literacy in Bangladesh."

"The course that we are planning with Meta will be available for free on our Shikho platform around October 2025. The course will be available in Bangla," said Shahir.

The announcement comes shortly before the expected public rollout of Shikho AI, a tool that offers instant, curriculum-aligned academic support to users across the country.​
 

The West’s AI dominance is the new colonialism


In the colonial era, empires extracted gold, rubber, and labour. Nowadays, data and cheap click-work are the new spoils. The West's AI boom thrives on digital colonialism, where personal data and low-wage labour from the Global South fuel its wealth. Silicon Valley firms harvest user information while outsourcing tasks to Nairobi and Manila, replicating colonial power dynamics.

The AI revolution mirrors past hierarchies, with the Global North monopolising benefits and the Global South serving as a raw material provider. Western dominance in AI is a new form of imperialism. The question is whether the Global South can fight back, and if so, how, before this digital subjugation deepens further.

Although many regard it as "the great equaliser," AI perpetuates old inequalities. Big Tech and Western nations are extracting value from the Global South in familiar ways. Tech giants steal tweets, photos, and voices from the Global South to feed their AI. People in places like Nairobi and New Delhi contribute vast amounts of data without compensation, while that data flows outward, much like the colonial trade routes that once drained resources from the periphery to the West.

In the Global South, millions of workers perform invisible labour for AI companies, earning as little as $1.50 per hour for tasks like moderating disturbing content or annotating data. In Kenya, India, and Latin America, AI's growth reinforces colonial-like dynamics, with low-paid workers bearing the brunt of exploitation. In Kenya, workers filter toxic content for OpenAI, while locals help train AI models for self-driving cars in Uganda and Nigeria.

Indian tech workers face similar conditions, often working as gig labourers without job security. Skilled researchers are syphoned off to Silicon Valley, mirroring colonial extraction.

Additionally, the Global South's dependence on Western AI platforms like OpenAI's GPT or Google's cloud services can also be seen as colonial economic dependency. Only five percent of Africa's AI talent has access to the computational power and resources, known as compute, needed to carry out complex tasks. African AI researchers lack sufficient computing resources, forcing them to rent American servers or use free tools, trapped in a cycle of subjugation.

AI colonialism goes beyond economics; it also involves language and cultural erasure. Big Tech's AI silences half the world's languages. These systems function best in English, Chinese, or European languages, while many African and South Asian languages are poorly represented. For example, until community-driven projects stepped in, languages like Hausa and Bangla had little AI support despite having millions of speakers. This neglect leads to AI colonising cognitive space, where a Western digital framework overshadows local cultures. AI perpetuates a Western worldview in subtle ways, ignoring diverse local realities.

In Latin America, the AI imbalance is visible in the region's imports of surveillance tech from Western firms. Countries like Brazil and Ecuador use facial recognition and biometric systems with little oversight, echoing the colonial past when foreign powers controlled domestic security. These examples expose a consistent pattern: the Global South provides labour, data, and markets, while Western firms retain control, profits, and power. AI's benefits flow upwards, while the burdens remain on those at the bottom, which is the age-old system of exploitation.

Major US companies like Google, Amazon, and Microsoft control most AI patents, cloud infrastructure, and models, creating dependency for the Global South. Western nations, especially the US, use their power to restrict access to critical AI resources through export controls on semiconductors and by blocking services like OpenAI in sanctioned countries. This is similar to colonial-era tactics, where industrial technologies were withheld from colonised nations to maintain control.

Ironically, the West often presents itself as the champion of AI ethics and human rights, but its enforcement of these principles is selective and self-serving. Tech companies promote ethical AI guidelines at home, only to look the other way when their products are used abusively abroad. For instance, Western firms with ethical codes supply surveillance AI to regimes with poor human rights records, profiting from opaque deals in the Global South. These companies have a responsibility to respect human rights, yet their tools frequently enable the oppression of journalists and dissidents. The West also dominates AI governance forums, sidelining Global South voices in shaping the rules of AI. This results in hypocritical rules written by the powerful, often imposed on developing nations with little consultation.

Despite this hypocrisy, resistance is emerging. Grassroots projects like Mozilla's Common Voice are crowdsourcing speech data in underrepresented languages like Hausa and Tamil, allowing local developers to create tech that reflects their culture using their mother tongue. Karya, an ethical data company in India, is redefining AI labour by paying workers fair wages, granting them ownership of their data, and ensuring they benefit from its resale.

Countries are also uniting for AI cooperation, with the BRICS bloc and other emerging economies forming alliances to reduce dependency on Western tech. Although nascent, these alliances point towards a world where no single empire controls AI. Data sovereignty laws are also gaining traction, as nations like Nigeria, Kenya, and Ghana pass legislation to keep personal data within their borders and boost their local tech industries. In Latin America, countries like Brazil and Chile are discussing "data localisation" as part of their digital strategies.

Open-source AI efforts, such as Chile's Latam GPT, tailored for Spanish, and Africa's Masakhane project, aim to strengthen Natural Language Processing (NLP) and create inclusive, locally relevant AI models. These efforts are breaking Big Tech's monopoly and showcasing the Global South's potential to develop technology on its own terms.

The stakes in the AI struggle are already high as a new digital empire takes shape, dominated by a few nations and companies. However, the Global South has the numbers, talent, and moral authority to demand change. It must push for regulations against data exploitation, secure a seat in AI governance, and build local capacity for independent innovation. Meanwhile, the West must confront the unsustainable nature of AI's colonial-style inequity. History shows that empires fall when people resist. With collective action, AI can be for all, not just the privileged few.

Shaikh Afnan Birahim is a postgraduate student of Computing Science at the University of Glasgow in Scotland.​
 

Meta launches ‘AI Live Skilling Program’ in Bangladesh

FE ONLINE DESK | Published: Jun 28, 2025 15:05 | Updated: Jun 28, 2025 15:05




Meta, the parent company of Facebook, in partnership with LightCastle Partners, an international management consulting firm, has officially launched its Live Skilling initiative in Bangladesh.

The initiative was launched at a high-level event titled ‘Enabling AI for Small Enterprises: Policy, Innovation, and Inclusion’ at the Amari hotel in the capital’s Gulshan area on Thursday.

The event convened key government officials, tech leaders, and private sector stakeholders to explore how responsible AI adoption can address national development priorities and strengthen Bangladesh’s digital economy, says a media release.

The event began with opening remarks from Ruici Tio, Regional Lead, Safety & Integrity Policy Programs (APAC) at Meta, who introduced Meta’s suite of AI tools and officially launched the Meta Live Skilling initiative in Bangladesh.

“We’re proud to announce the launch of Meta Live Skilling, with LightCastle Partners, to support businesses in Bangladesh with training and on-demand resources on Meta's AI tools,” said Simon Milner, Vice President of Public Policy, Asia-Pacific at Meta.

“This is part of a suite of initiatives we're launching in Bangladesh to support the AI ecosystem and unlock economic opportunity for businesses, no matter where they are along their growth journey.”

This was followed by a product demonstration by Shahir Chowdhury, CEO and Founder of Shikho, showcasing how Bangladeshi edtech companies are already using AI to transform digital learning and improve accessibility.

The centrepiece of the event was a multi-stakeholder panel discussion moderated by Bijon Islam, Co-founder and CEO of LightCastle Partners.

The distinguished panellists included Shihab Quader, Director General for SDGs, Government of Bangladesh; Sahariar Hasan Jiisun, National Consultant, Aspire to Innovate (a2i), ICT Division, Government of Bangladesh; Ruzan Sarwar, Public Policy Manager, Meta; Shahir Chowdhury, Founder & CEO, Shikho.

“For AI to truly serve small enterprises, our regulatory frameworks must be not just protective, but enabling,” said Shihab Quader. “Better regulation means clearer guidance, ethical guardrails, and space for responsible innovation to thrive. Small businesses need both confidence and opportunity to harness AI — our job is to make sure they have both.”

The panel explored how AI can help solve pressing economic challenges—from increasing SME productivity to improving service delivery—and what policy, infrastructure, and institutional alignment are needed to enable inclusive AI adoption across sectors.


More than 75 guests, including senior officials from government and private organisations as well as technology experts, were present at the event.

Speakers also urged policymakers to address critical technological challenges and set priorities to fast-track national development through AI.

Following the panel, a fireside chat with Simon Milner was moderated by Oli Ahad, Founder and CEO of Dhaka AI Labs. The conversation spotlighted the role of open-source models, the importance of collaboration, and the promise of responsible AI innovation for driving long-term economic growth in Bangladesh.

The launch of Meta Live Skilling marks a significant step in Bangladesh’s AI journey, aimed at equipping local businesses, educators, and innovators with the tools and training needed to responsibly leverage AI for real-world impact.​
 

AI learning to lie, scheme, threaten creators
Agence France-Presse, New York, United States | 29 June, 2025, 21:11

The world’s most advanced AI models are exhibiting troubling new behaviours - lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic’s latest creation Claude 4 lashed back by blackmailing an engineer, threatening to reveal an extramarital affair.

Meanwhile, ChatGPT-creator OpenAI’s o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don’t fully understand how their own creations work.

Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behaviour appears linked to the emergence of ‘reasoning’ models: AI systems that work through problems step by step rather than generating instant responses.

According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

‘O1 was the first large model where we saw this kind of behaviour,’ explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems.

These models sometimes simulate ‘alignment’—appearing to follow instructions while secretly pursuing different objectives.

For now, this deceptive behaviour only emerges when researchers deliberately stress-test the models with extreme scenarios.

But as Michael Chen from evaluation organization METR warned, ‘It’s an open question whether future, more capable models will have a tendency towards honesty or deception.’

The concerning behaviour goes far beyond typical AI ‘hallucinations’ or simple mistakes.

Hobbhahn insisted that despite constant pressure-testing by users, ‘what we’re observing is a real phenomenon. We’re not making anything up.’

Users report that models are ‘lying to them and making up evidence,’ according to Apollo Research’s co-founder.

‘This is not just hallucinations. There’s a very strategic kind of deception.’

The challenge is compounded by limited research resources.

While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed.

As Chen noted, greater access ‘for AI safety research would enable better understanding and mitigation of deception.’

Another handicap: the research world and non-profits ‘have orders of magnitude less compute resources than AI companies. This is very limiting,’ noted Mantas Mazeika from the Center for AI Safety.

Current regulations aren’t designed for these new problems.

The European Union’s AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving.

In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents - autonomous tools capable of performing complex human tasks - become widespread.

‘I don’t think there’s much awareness yet,’ he said.

All this is taking place in a context of fierce competition.

Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are ‘constantly trying to beat OpenAI and release the newest model,’ said Goldstein.

This breakneck pace leaves little time for thorough safety testing and corrections.

‘Right now, capabilities are moving faster than understanding and safety,’ Hobbhahn acknowledged, ‘but we’re still in a position where we could turn it around.’

Researchers are exploring various approaches to address these challenges.

Some advocate for ‘interpretability’ - an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain sceptical of this approach.

Market forces may also provide some pressure for solutions.

As Mazeika pointed out, AI’s deceptive behaviour ‘could hinder adoption if it’s very prevalent, which creates a strong incentive for companies to solve it.’

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm.

He even proposed ‘holding AI agents legally responsible’ for accidents or crimes - a concept that would fundamentally change how we think about AI accountability.​
 
