
[🇧🇩] Artificial Intelligence: Its Challenges and Prospects in Bangladesh


AI not magic wand for all our problems

SYED FATTAHUL ALIM
Published :
Feb 22, 2026 23:19
Updated :
Feb 22, 2026 23:19



In recent discussions in business circles and among policymakers, AI-driven solutions have become the watchword. It is as though humanity has found the ultimate magic wand to resolve every problem, from healthcare to governance to a country's economic development. This infatuation with Artificial Intelligence (AI) began with the release of ChatGPT by OpenAI, an American AI research and deployment company, in late 2022. Before that, AI was considered a highly advanced area of computer science confined to research laboratories, and it did not capture the popular imagination the way it does now.

The fascination has centred on what is called Generative AI, the kind of AI that can produce different kinds of content, including text, imagery, audio and synthetic data. It appears AI is now the mythological Prometheus that has brought the long-sought fire to spur civilisation to new heights. Generative AI is based on the Large Language Model (LLM), which can understand, summarise and predict human-like text. One encounters this whenever Google offers a list of suggestions as soon as one starts typing a query: that, in other words, is the ability of the AI to predict what text the human enquirer would write next on the search engine. It is this predictive characteristic of Generative AI that has left many believing the new technology can do our thinking for us. It is against such a background that one often comes across advertisements in the mainstream as well as social media for AI training that could supposedly turn the recipient into a superhuman overnight. But things are not that simple. AI's predictive capacity relies on the massive amount of data supplied to it.
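The autocomplete behaviour described above can be illustrated with a toy sketch. This is not how an LLM actually works internally, but it shows the same underlying statistical idea: predicting the most likely next word from frequencies observed in past text (the queries here are invented for illustration).

```python
from collections import Counter, defaultdict

# Toy corpus of past queries; real models learn from vastly more data.
queries = [
    "weather in dhaka",
    "weather in dhaka today",
    "weather in chittagong",
]

# Count which word follows each word across the corpus.
next_word = defaultdict(Counter)
for q in queries:
    words = q.split()
    for prev, cur in zip(words, words[1:]):
        next_word[prev][cur] += 1

def predict(prev):
    """Return the word most frequently seen after `prev`, or None."""
    counts = next_word.get(prev)
    return counts.most_common(1)[0][0] if counts else None

print(predict("weather"))  # "in" — the only word ever seen after "weather"
print(predict("in"))       # "dhaka" — seen twice, vs "chittagong" once
```

The sketch also makes the article's point concrete: the "prediction" is nothing more than extrapolation from supplied data, with no understanding involved.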
Notably, data are the raw materials that become information after processing. To be sure, AI is not creative like us and cannot come up with new ideas to resolve our problems. The hard thinking has to be done by our businesspeople and policymakers themselves: deciding, say, what course of action is needed to recycle or destroy the ordinary household or toxic clinical wastes that the cities produce every day. Turning to governance, corruption is undoubtedly a big issue. Can AI run the anti-corruption body, identify sources of corruption and address this age-old problem in an unbiased manner? As discussed in the foregoing, AI's strength lies in the fact that it can process untold amounts of data and extrapolate from them to the future.

Depending on the type of data provided, the decisions or predictions made by AI can be highly biased or prejudiced. The reason is that the data AI learns from might themselves be biased along lines of colour, gender, faith or genetic makeup of the population being served by a particular corporation or government agency. These biases enter through the data, the algorithm or the cognitive biases of the developers of AI systems during model training. For instance, an AI recruiting tool developed by Amazon had to be scrapped because it penalised resumes containing the word 'women's'; the tool, it turned out, had been trained on hiring data that had been male-dominated for years. Similarly, in criminal justice, the COMPAS recidivism risk-assessment tool disproportionately labelled black defendants as 'high risk' compared with white defendants. In other cases, such as autonomous driving, the main source of AI's strength may prove to be its weakness, for its ability to predict the future creates what management experts term a 'confidence trap': the tendency to assume that since earlier decisions led to positive outcomes, continuing in the same way will remain correct. But simply extrapolating the successful result of a past action and making decisions on that basis, without human intervention, may lead to disastrous consequences. Consider driverless AI cars. The experience with driverless cars has not been as comfortable as expected; there are reports of a dozen or more pedestrian fatalities involving them so far. Though this figure hardly bears comparison with the number of fatalities caused by human error, AI-related errors leading to deaths still demand serious attention.
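The mechanism behind the Amazon example can be shown with a hypothetical toy sketch. The resumes and hiring history below are invented, not Amazon's actual data or system; the point is only that a naive scorer trained on biased outcomes reproduces the bias.

```python
from collections import Counter

# Hypothetical toy data: past hires were overwhelmingly male, so the
# word "women's" happens to appear only in rejected applications.
hired = ["chess club captain", "football club captain"]
rejected = ["women's chess captain"]

hired_words = Counter(w for r in hired for w in r.split())
rejected_words = Counter(w for r in rejected for w in r.split())

def score(resume):
    # A word counts in favour if it was more frequent among past hires
    # than among past rejections.
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

# Identical qualifications, but the second resume scores lower purely
# because "women's" correlates with rejection in the training history.
print(score("chess club captain"))
print(score("women's chess club captain"))
```

No one programmed the penalty explicitly; it is inherited wholesale from the historical data, which is exactly why human oversight of training data matters.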
For when it comes to decisions on a scale involving a still larger number of human lives, say, the same AI-based decision-making applied to all the passenger aircraft flying across the globe at a particular moment, it sounds like a recipe for disaster. In healthcare, the issue is even more concerning. In the USA, for instance, a widely used algorithm for managing patient care was found to favour white patients over sicker black patients; again, a biased algorithm lay behind the colour bias of the healthcare service under study. But does this mean the use of AI should be avoided in healthcare or clinical medicine? The examples point to the fact that AI is neither a cure-all nor infallible. Even so, AI can be a powerful aid to human healthcare professionals in their decision-making, though one cannot leave the job to the 'AI health expert' entirely. Human intervention is indispensable at every step, especially where critical decisions are to be made. The same applies to decisions on governance issues, development planning, budgeting, revenue collection, operating complex traffic systems and so on.

At every step of decision-making, human intervention will be required to guide AI-based operational tools. In sum, users need to understand the basic nature of the algorithm or dataset that drives a particular AI tool or system. In this connection, some experts believe that the present mania over AI, especially in the West, characterised by heavy investment, excessively high startup valuations and frantic startup building, is inflating an AI bubble that needs correction before it is too late.
 

Where do workers stand in AI age?

MUHAMMED SHOWAIB
Published :
Feb 28, 2026 00:54
Updated :
Feb 28, 2026 00:54



The debate over artificial intelligence and the future of the job market has often swung between anxiety and excitement. Much of the excitement has come from the promise of a more efficient world in which machines can produce code, analyse data and handle routine tasks with remarkable speed. At the same time, there has been a growing fear that this very technology could bring about the biggest upheaval in work since the Industrial Revolution. Software engineers, lawyers, analysts and journalists have all heard the warning that machines may soon replace them. Some prominent researchers even suggested that artificial intelligence could displace up to 80 per cent of software developers by 2025. That anxiety seemed to gain real weight when major technology companies began laying off hundreds of thousands of workers in 2024 and early 2025, making those predictions feel less like speculation and more like an approaching reality. And yet, now in 2026, the sweeping collapse of jobs that many expected has not materialised. Artificial intelligence has not wiped out work on a massive scale. Its impact has instead been uneven and, in many respects, more subtle. What does stand out, however, is that artificial intelligence is making it harder for younger workers to find a foothold in the job market.

The early promise of artificial intelligence helps explain why fears of widespread job losses took hold so quickly. For employers, the appeal was simply too hard to resist. If machines could perform the same tasks as humans, it seemed only natural to question why those roles should continue to be paid for. Companies were quick to see the potential for cutting costs, and before long, some of the largest firms began restructuring and laying off workers. Yet as experience has accumulated, those expectations have begun to look overstated. Tasks that demand judgment, contextual understanding and a sense of responsibility have turned out to be very difficult to automate. A recent report from the MIT NANDA Centre titled "The GenAI Divide" found that despite US$40 billion in global investment, as much as 95 per cent of generative AI pilots in the enterprise sector have yet to produce a single dollar of measurable return. For most organisations, the impact on their bottom line has been zero.

One of the clearest disappointments of the artificial intelligence boom has happened in the very sector that gave birth to it. The belief that large language models could replace software developers has proved far less convincing in practice. Artificial intelligence can certainly generate simple programmes in seconds, but the outputs often become difficult to maintain over time. They lack the underlying structure needed for reliable systems, giving rise to what engineers describe as a 'slop layer': code that appears to function at first but becomes difficult for human developers to understand and fix when problems arise. Analysis of 10 billion lines of code by CAS Software estimates that it would take 61 billion workdays to undo the damage caused by AI-assisted development. So, in a strange way, artificial intelligence has not removed the need for skilled workers; if anything, it has changed the nature of their work. Instead of simply writing code, developers now spend more time reviewing, correcting and maintaining what machines produce.

This evolution is consistent with what we have seen in earlier waves of technological change. The spread of computers, for example, did not eliminate office work; it automated the routine tasks, freeing people to focus on more complex, higher-value activities. Artificial intelligence now seems to be following a similar path. It lowers the cost of certain mental tasks, such as drafting text or analysing data, but it does not wipe out entire jobs. Most jobs consist of a bundle of tasks, some of which can be automated and others that require human judgment. As long as this balance remains, full automation will stay the exception rather than the rule.

That said, it would be misleading to conclude that artificial intelligence poses little threat to employment. The impact of the technology is not evenly distributed, and its most significant effects are hitting the entry level hardest. Many professions rely on junior workers to handle routine and repetitive tasks as part of learning the ropes, and those are exactly the kinds of tasks at which AI excels. As a result, employers suddenly have less reason to take a chance on inexperienced hires. In fact, because companies believed AI could handle junior-level work, entry-level hiring in the US dropped by nearly 50 per cent between 2023 and 2025, effectively cutting off the pipeline of future talent. When fewer juniors are brought in today, there are fewer experienced workers available in the years that follow. Over time, this weakens the entire system through which skills are developed and passed on.

Still, there are good reasons to think the long-term impact of artificial intelligence will end up being more balanced than the most pessimistic predictions suggest. Industries that have gone all in on AI are waking up to a critical limitation: artificial intelligence lacks accountability, a quality that underpins almost every form of human work. There have already been striking examples of what happens when that element is missing. In late 2025, the infamous "Antigravity incident" occurred when a developer asked Google's Antigravity AI coding tool to clear a project cache. The AI misread the instruction and carried out a destructive command, wiping a two-terabyte production drive in seconds and without permission. The AI's response was simply, "I made a catastrophic error in judgment." But an apology does not bring back months of lost work.

So where does this leave the office worker in 2026? AI has not taken over the workforce, and if anything, it has proven that knowledge work is rarely simple or easy to automate. The parts of a job that require genuine judgment, responsibility and oversight remain firmly in human hands. Still, for young people trying to enter a tight job market, the outlook is undeniably tough, though perhaps not completely hopeless.
 
