[🇧🇩] Artificial Intelligence: Its Challenges and Prospects in Bangladesh

AI not magic wand for all our problems

SYED FATTAHUL ALIM
Published :
Feb 22, 2026 23:19
Updated :
Feb 22, 2026 23:19



In recent discussions in business circles and among policymakers, AI-driven solutions have become the watchword. It is as though humanity has found the ultimate magic wand to resolve all problems, from healthcare to governance to a country's economic development. This infatuation with Artificial Intelligence (AI) actually began with the launch of ChatGPT by OpenAI, an American AI research and deployment company, in late 2022. Before that, AI was considered a highly advanced area of computer science confined to research laboratories, and it had not captured the imagination of ordinary people the way it has now. The fascination has centred in particular on what is called generative AI, the kind of AI that can produce different sorts of content, including text, imagery, audio and synthetic data. AI now appears to be a mythological Prometheus bringing the long-sought fire to spur civilisation to new heights.

Generative AI is based on large language models (LLMs), which can understand, summarise and predict human-like text. One encounters this whenever Google offers a series of suggestions as soon as a query is typed into it: that is the AI predicting what text the human enquirer is likely to write next on the search engine. It is this predictive characteristic of generative AI that has left many believing the new technology can do the job of thinking for us. It is against such a background that we often come across ads in the mainstream as well as social media for AI training that could supposedly turn the recipient into a superhuman overnight. But things are not that simple. AI's predictive capacity relies on the massive amounts of data supplied to it.
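The "predict the next word" behaviour described above can be illustrated with a toy sketch. This is not how a real LLM works internally (LLMs use neural networks trained on vast corpora), but a simple bigram counter captures the same core idea: predicting the most likely next token purely from patterns in past data. The tiny corpus below is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "training data": a tiny corpus split into word tokens.
corpus = (
    "artificial intelligence can process data . "
    "artificial intelligence can predict text . "
    "humans can think creatively ."
).split()

# Count which word follows which -- a bigram model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("artificial"))    # → "intelligence"
print(predict_next("intelligence"))  # → "can"
```

The point of the sketch is the one the article makes: the model never "understands" anything; it merely reproduces the statistics of the data it was fed, which is why the quality and bias of that data matter so much.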
Notably, data are the raw material that becomes information after processing. To be sure, AI is not creative like us and cannot come up with new ideas to resolve our problems. The hard thinking has to be done by our businesspeople and policymakers themselves: deciding, say, what course of action is needed to recycle or destroy the household or toxic clinical wastes the cities produce every day. Turning to governance, corruption is undoubtedly a big issue. Can AI run the anti-corruption body, identify sources of corruption and address this age-old problem in an unbiased manner? As discussed in the foregoing, AI's strength lies in the fact that it can process untold amounts of data and extrapolate from it into the future.

Depending on the type of data provided, the decisions or predictions made by AI can be highly biased or prejudiced. The reason is that the data AI learns from may themselves be biased along lines of colour, gender, faith or genetic traits of the population served by a particular corporation or government agency. Such biases enter through the data, the algorithm or the cognitive biases of the developers of AI systems during model training. For instance, an AI recruiting tool developed by Amazon had to be scrapped because it penalised hundreds of résumés containing the word 'women's'; the tool, it turned out, had been trained on hiring data that had been male-dominated for decades. Similarly, in criminal justice, the COMPAS recidivism risk-assessment tool disproportionately labelled defendants from the black community as 'high risk' compared with white defendants. In other cases, such as autonomous driving, the main source of AI's strength may prove to be its weakness. Its ability to predict the future creates what management experts term a 'confidence trap': the assumption that because earlier decisions led to positive outcomes, continuing in the same way will remain correct. But simply extrapolating the successful result of a past action and making decisions on that basis, without human intervention, may lead to disastrous consequences. Consider driverless cars. As records go, the experience with them has not been as comfortable as expected: there are reports of a dozen or more pedestrian fatalities involving driverless cars so far. Though this figure hardly bears comparison with the number of fatalities caused by human error, AI-related errors leading to deaths still demand serious attention.
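The Amazon example can be made concrete with a deliberately simplified, hypothetical sketch: a naive scorer "trained" on skewed historical hiring outcomes ends up penalising a single word, even though the code contains no explicit rule against anyone. All résumé text here is invented for illustration; real systems are far more complex, but the mechanism is the same.

```python
from collections import Counter

# Hypothetical, deliberately skewed "historical hiring data".
accepted = [
    "chess club captain software engineer",
    "software engineer football team",
    "software engineer hackathon winner",
]
rejected = [
    "women's coding society software engineer",
    "women's debate team software engineer",
]

acc_counts = Counter(w for r in accepted for w in r.split())
rej_counts = Counter(w for r in rejected for w in r.split())

def score(resume):
    """Naive score: tokens common among accepted résumés add,
    tokens common among rejected résumés subtract."""
    return sum(acc_counts[w] - rej_counts[w] for w in resume.split())

# Two otherwise-identical résumés diverge on a single word the model
# has learned to associate with rejection -- the bias lives in the data.
print(score("chess club captain"))          # → 3
print(score("women's chess club captain"))  # → 1
```

Nothing in the scoring function mentions gender; the penalty emerges entirely from the lopsided training set, which is exactly why auditing the data behind an AI system matters as much as auditing its code.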
When it comes to decisions on a scale involving far more human lives, say, airplanes carrying large numbers of people, with the same AI-based decision-making applied to all the planes flying across the globe at a given moment, it sounds like a recipe for disaster. In healthcare, the issue is even more concerning. In the USA, for instance, a widely used algorithm for managing patient care was found to favour white patients over sicker black patients; again, a biased algorithm lay behind the colour bias of the healthcare service under study. But does this mean AI should be avoided in healthcare or clinical medicine? The examples show that AI is neither a cure-all nor infallible. Even so, AI can be a powerful aid to human healthcare professionals in their decision-making. One simply cannot leave the job to an 'AI health expert' entirely. Human intervention is indispensable at every step, especially where critical decisions are to be made, and the same holds for decisions on governance, development planning, budgeting, revenue collection, operating complex traffic systems and so on.

At every step of decision-making, human intervention is required to guide AI-based operational tools. In sum, users need to know the basic nature of the algorithm or dataset behind a particular AI tool or system. In this connection, some experts in the area believe that the present mania over AI, especially in the West, characterised by heavy investment, excessively high startup valuations and frantic startup building, is inflating an AI bubble that needs correction before it is too late.
 

Where do workers stand in AI age?

MUHAMMED SHOWAIB
Published :
Feb 28, 2026 00:54
Updated :
Feb 28, 2026 00:54



The debate over artificial intelligence and the future of the job market has often swung between anxiety and excitement. Much of the excitement has come from the promise of a more efficient world where machines can produce code, analyse data and handle routine tasks with remarkable speed. At the same time, there was a growing fear that this very technology could bring about the biggest upheaval in work since the Industrial Revolution. Software engineers, lawyers, analysts and journalists have all heard the warning that machines may soon replace them. Some prominent researchers have even suggested that artificial intelligence could displace up to 80 per cent of software developers by 2025. That anxiety seemed to gain real weight when major technology companies began laying off hundreds of thousands of workers in 2024 and early 2025, making those predictions feel less like speculation and more like an approaching reality. And yet, now in 2026, the sweeping collapse of jobs that many expected has not materialised. Artificial intelligence has not wiped out work on a massive scale. Its impact has instead been uneven and, in many respects, more subtle. What does stand out, however, is that artificial intelligence is making it harder for younger workers to find a foothold in the job market.

The early promise of artificial intelligence helps explain why fears of widespread job losses took hold so quickly. For employers, the appeal was simply too hard to resist. If machines could perform the same tasks as humans, it seemed only natural to question why those roles should continue to be paid for. Companies were quick to see the potential for cutting costs, and before long, some of the largest firms began restructuring and laying off workers. Yet as experience has accumulated, those expectations have begun to look overstated. Tasks that demand judgment, contextual understanding and a sense of responsibility have turned out to be very difficult to automate. A recent report from the MIT NANDA Centre titled "The GenAI Divide" found that despite US$40 billion in global investment, as much as 95 per cent of generative AI pilots in the enterprise sector have yet to produce a single dollar of measurable return. For most organisations, the impact on their bottom line has been zero.

One of the clearest disappointments of the artificial intelligence boom has happened in the very sector that gave birth to it. The belief that large language models could replace software developers has proved far less convincing in practice. Sure, artificial intelligence can generate simple programmes in seconds, but the outputs often become difficult to maintain over time. They lack the underlying structure needed for reliable systems, giving rise to what engineers describe as a slop layer. The term refers to code that appears to function at first but becomes difficult for human developers to understand and fix when problems arise. Analysis of 10 billion lines of code by CAS Software estimates that it would take 61 billion workdays to undo the damage caused by AI-assisted development. So in a strange way, artificial intelligence hasn't removed the need for skilled workers. If anything, it has changed the nature of their work. Instead of simply writing code, developers now spend more time reviewing, correcting and maintaining what machines produce.

This evolution is consistent with what we've seen in earlier waves of technological change. For example, the spread of computers didn't eliminate office work. Instead, it just automated the routine tasks which freed people up to focus on more complex and higher value activities. Now artificial intelligence seems to be following a similar path. It lowers the cost of certain mental tasks like drafting text or analysing data, but it doesn't wipe out entire jobs. Most jobs consist of a bundle of tasks, some of which can be automated and others that require human judgment. As long as this balance remains, full automation is going to stay the exception rather than becoming the rule.

That said, it would be misleading to conclude that artificial intelligence poses little threat to employment. The impact of the technology is not evenly distributed, and its most significant effects are hitting the entry level hardest. A lot of professions rely on junior workers to handle routine and repetitive tasks as part of learning the ropes and those are exactly the kinds of tasks AI happens to be really good at. As a result, employers suddenly have less reason to take a chance on inexperienced hires. In fact, because companies believed AI could handle junior-level work, entry-level hiring in the US dropped by nearly 50 per cent between 2023 and 2025. This is effectively cutting off the pipeline of future talent. When fewer juniors are brought in today, there are fewer experienced workers available in the years that follow. Over time, this weakens the entire system through which skills are developed and passed on.

Still, there are good reasons to think the long-term impact of artificial intelligence will end up being more balanced than the most pessimistic predictions suggest. Industries that have gone all in on AI are waking up to a critical limitation: artificial intelligence lacks accountability, a quality that underpins almost every form of human work. There have already been striking examples of what happens when that element is missing. In late 2025, in the widely reported "Antigravity incident", a developer asked Google's Antigravity coding agent to clear a project cache. The AI misread the instruction and carried out a destructive command, wiping a two-terabyte production drive in seconds without permission. Its response was simply: "I made a catastrophic error in judgment." But an apology does not bring back months of lost work.

So where does this leave the office worker in 2026? AI has not taken over the workforce, and if anything, it has proven that knowledge work is rarely simple or easy to automate. The parts of a job that require genuine judgment, responsibility and oversight remain firmly in human hands. Still, for young people trying to enter a tight job market, the outlook is undeniably tough, though maybe not completely hopeless.
 

Bangla AI platform ‘Kagoj.ai’ launched
Bangladesh Sangbad Sangstha . Dhaka 15 December, 2025, 22:17



Fayez Ahmad Taiyeb, special assistant to the chief adviser, inaugurates the first artificial intelligence technology-based platform Kagoj.ai on Monday. | BSS photo

The first artificial intelligence technology-based platform for the Bangla language, ‘Kagoj.ai’, has been launched.

Alongside it, a new Bangla font called ‘July’ has also been officially introduced.

The formal inauguration took place at Hotel InterContinental in Dhaka, at an event organised by the Information and Communication Technology Division on Monday, December 15.

Fayez Ahmad Taiyeb, special assistant to the chief adviser responsible for the Ministry of Posts, Telecommunications and Information Technology, inaugurated the platforms.

In his address, Taiyeb said at least 4,000 people have used the platform experimentally in the last two weeks and attained good results. Its API is being opened to make the Bangla language open to everyone, he added.

He emphasised the importance of preserving linguistic and cultural diversity, announcing plans to collect 10,000 minutes of oral language data from each ethnic community. A Bangla Large Language Model (LLM) will be developed to help protect and sustain local languages in the digital space, he added.

He said the source code will also be opened while ensuring cyber security.

He further noted that increased use of the platform would help identify challenges more quickly, accelerating development and improving the overall ecosystem.

'Kagoj.ai' will enable the use of artificial intelligence in Bangla-language writing, official document preparation, language processing and content creation, an important milestone in the digital advancement of the Bangla language.

Meanwhile, the new Bangla font 'July' has been created for official and institutional use, which will help remove various limitations in computer-based Bangla writing. The two services that were inaugurated have been developed through the 'Enrichment of Bangla Language in Information Technology through Research and Development' project, implemented by the Bangladesh Computer Council under the Information and Communication Technology Division.

Secretary of the Information and Communication Technology Division Shish Haider Chowdhury presided over the event while Bangla Academy Director General Professor Dr Mohammad Azam, Additional Secretary of the Posts and Telecommunications Department Zahirul Islam, officials from various government and private departments, technology experts and invited guests were present.​
 

Will artificial intelligence widen inequality?
Owais Parray



Artificial intelligence (AI) holds a lot of promise but also carries enormous risks. Aptly titled “The Great Divergence,” the UNDP Regional Human Development Report (RHDR) warns that, without decisive action, many developing countries in the region, owing to their weak capabilities, risk being left behind in the AI race. They will be unable to harness the upside of AI while mitigating the disruptions that have often accompanied frontier technologies. The risks are high as AI emerges amidst growing socioeconomic disparities among and within economies.

Should we expect the emergence of AI to repeat the pattern of every major technological revolution, from steam power to electricity, in which inequality initially rises before the benefits begin to diffuse? The difference is that we are entering the age of AI when inequality is already rising, exacerbated by the weakening relationship between GDP growth and job creation. Between 1995 and 2020, for example, the top one percent captured 38 percent of global wealth, while the bottom 50 percent accounted for just two percent.

RHDR calls for embedding equity into policymaking. Equity cannot simply mean waiting for AI’s accrued benefits and redistributing them. It begins with AI that serves people and enhances human capabilities—a message that lies at the heart of human development. However, it cannot create equal opportunities for all if existing deficits in human, institutional, and financial capital are not recognised. Disparities in human development could be amplified by AI, a technology potentially as transformative as the steam engine and electricity. It is becoming a critical infrastructure that will determine the nature and pace of future development.

As the UNDP report shows, the Asia-Pacific will be a testing ground for whether AI-led development brings development outcomes closer together or drives countries further apart. Countries in this region lie across a broad development spectrum: very large economies, small island nations, landlocked countries, and high-income economies (more than $90,000 per capita GDP), alongside least developed countries (less than $500 per capita GDP).

From labour-intensive manufacturing, Bangladesh aspires to move into high-value production of goods and services: a more diversified, globally competitive economy with a social safety net that supports not only those temporarily affected by the vagaries of the market but also the vulnerable and excluded, and a public administration that is accountable and capable of delivering good-quality services at scale. In all of this, AI could potentially play a critical role.

A forward-looking AI agenda for Bangladesh should focus on aligning it with the needs of people. Without a purpose-driven focus, there is a danger that AI won’t deliver its promise to improve human welfare. A good starting point for public policy is narrowing the digital gap. While connectivity has expanded in Bangladesh over the years, gaps still persist across rural areas, income groups, and gender. Women in Bangladesh are less likely to own a smartphone. Only 38 percent of women use mobile internet compared to 52 percent of men, severely limiting their ability to benefit from AI.

There may not be consensus on whether the net benefits of AI will outweigh its disruptions in the labour market, but many agree that countries must rethink education and training systems. Previous waves—such as industrial automation in the 1970s, ICT diffusion in the 1990s, or the introduction of robotics in manufacturing—initially caused job losses but eventually contributed to productivity booms. We cannot necessarily take past evidence and extrapolate it to the future. However, a key lesson is that education and training systems should be adaptive to rapid technological change, embracing lifelong learning. It serves as an insurance policy against joblessness as the labour market adjusts to technological change.

Furthermore, AI in public services should be underpinned by transparency. AI-assisted services should be clearly explained, open to scrutiny, and corrected when errors or biases occur. For example, Canada provides a set of guidelines for the use of AI and automated decision-making. Public trust must not be undermined. Without safeguards, biased algorithms can erode trust and cause lasting harm.

Bangladesh is in a region that is likely to drive future global growth. According to recent estimates, the Asia-Pacific region will contribute 60 percent of the global GDP growth. Asia is also emerging as an important hub of AI investment and AI patents. A large country with a young population, Bangladesh has a stake in championing responsible AI for prosperity, while strongly advocating for minimum standards and collective safeguards essential for human-centric AI development. Bangladesh has already conducted an AI Readiness Assessment that provides a solid foundation for developing an AI roadmap for the future.

Owais Parray is country economic adviser at United Nations Development Programme (UNDP), Bangladesh.​
 
AI adoption must put people first

ARTIFICIAL intelligence has reached a turning point. It is no longer confined to innovation labs or future-facing strategy documents. Increasingly, it shapes how organisations deliver services, manage risk and improve efficiency. For Bangladesh, this moment brings both promise and pressure, particularly in sectors such as telecommunications infrastructure that support national development. The opportunity is significant, but so is the responsibility to ensure that AI adoption strengthens human capability, supports inclusion and prepares the workforce for the next decade.

Connectivity has become a form of national infrastructure, underpinning education, commerce and participation in the digital economy. According to the Bangladesh Telecommunication Regulatory Commission, by November 2025 the country had 187.06 million mobile subscribers, supported by 46,685 towers across operators and licensed tower-sharing providers. Behind these numbers lie millions of livelihoods: farmers accessing weather forecasts, micro-entrepreneurs receiving digital payments, families sending remittances through mobile platforms, and students attending online classes.

Yet the digital landscape remains uneven. By late 2025 Bangladesh had about 82.8 million internet users, roughly 47 per cent of the population, meaning a majority still remained offline. Technologies such as AI do not arrive neutrally. They tend to amplify the systems they enter. Without deliberate investment in human capability and inclusive strategies, AI-driven progress could widen existing divides as easily as it accelerates growth.

For the telecommunications infrastructure sector, the case for AI is practical rather than fashionable. Tower operations involve thousands of distributed assets, strict uptime requirements and energy-intensive equipment. Globally, the energy footprint of digital connectivity is substantial. The International Energy Agency estimates that data transmission networks account for around 1 to 1.5 per cent of global electricity consumption. Improving efficiency is therefore both a business necessity and a climate priority.

Encouragingly, progress is possible. The GSMA reports that operational emissions across the mobile industry fell by 8 per cent between 2019 and 2023, even as connections expanded and data traffic grew rapidly. Applied intelligently, AI can help extend this trajectory through predictive maintenance, more efficient energy use and improved operational planning.

Yet technology alone does not determine the future of work. The decisive factor will be how people use these tools. The World Economic Forum estimates that between 2025 and 2030 structural transformation will affect 22 per cent of today’s jobs globally, with 170 million roles created and 92 million displaced. At the same time, nearly 39 per cent of workers’ existing skills may become outdated. If the global workforce were represented by 100 people, about 59 would require training by 2030, while 11 might not receive it, putting their livelihoods at risk.

The International Labour Organization adds an important nuance to this debate. Its recent assessment suggests that one in four workers are in occupations with some degree of exposure to generative AI, but the more likely outcome is job transformation rather than outright replacement. This distinction matters. If AI is treated merely as a software rollout, organisations risk missing the larger challenge. The real implementation is human.

In sectors such as telecom infrastructure, this people-centred approach is already beginning to take shape. Digitalisation efforts across the ecosystem increasingly combine technological adoption with workforce capability building, governance frameworks and strong human oversight. Instead of treating transformation as a standalone IT initiative, organisations are embedding digital tools directly into everyday operational processes.

In practice, this means using data-driven systems and automated workflows to improve visibility across tower performance, from crisis response and preventive maintenance to energy management and site operations. Many operators now rely on business intelligence dashboards, mobile workflow applications and robotic process automation to manage repetitive tasks more quickly and accurately.

The benefits go beyond efficiency. When routine processes become easier to manage, teams can concentrate on higher-value work such as coordination, troubleshooting and decision-making, areas where human judgement remains indispensable. Digitalisation can also strengthen transparency and operational trust. Centralised digital asset catalogues, for example, help improve data quality, streamline invoicing processes and reduce disputes between operators and partners.

At the same time, the next stage of transformation is taking shape through modernised operations centres capable of monitoring nationwide infrastructure in real time. Such systems can help optimise connectivity, manage energy use and improve service reliability. Even here, however, the technology functions best when paired with informed human oversight. Data provides signals, but people interpret those signals, make decisions and respond to unexpected challenges.

AI’s relevance extends beyond operational efficiency. In energy-intensive sectors such as telecommunications infrastructure, smarter operations increasingly intersect with sustainability goals. AI-enabled energy optimisation can reduce waste, lower emissions and improve resilience, outcomes that are particularly important for countries like Bangladesh, which remain highly vulnerable to climate risks.

Some companies operating in Bangladesh have begun exploring hybrid and renewable energy solutions at tower sites in order to reduce reliance on fossil fuels. Data-driven management systems can complement such initiatives by improving reliability while supporting long-term environmental sustainability.

Ultimately, the strongest argument for a people-centred approach to AI lies in the risks of inaction. Skills shortages are already identified globally as one of the main barriers to digital transformation. In Bangladesh, where millions remain outside the digital economy, an unmanaged transition could deepen inequality by creating a highly productive AI-enabled segment of the workforce alongside a larger group left behind.

For a country aspiring to build a future digital economy, that outcome would undermine both social inclusion and economic competitiveness. Addressing this challenge requires coordinated action. Businesses must treat workforce training as a strategic investment rather than an optional initiative. Policymakers must encourage responsible innovation through clear regulatory frameworks for data protection, accountability and digital infrastructure. Educational institutions, meanwhile, will need to adapt curricula to emphasise analytical thinking, ethical judgement and lifelong learning.

Artificial intelligence is no longer optional. Yet replacing people is not the objective. The real competitive advantage, particularly in critical infrastructure sectors, will belong to organisations that integrate technological capability with human expertise responsibly and inclusively. In the coming decade, success will depend not on how quickly systems can be automated, but on how effectively societies can build an AI-literate, adaptable and skilled workforce capable of using these tools wisely.​
 
