Saif
Senior Member
- Joined
- Jan 24, 2024
- Messages
- 17,281
- Likes
- 8,334
AI not magic wand for all our problems
SYED FATTAHUL ALIM
Published :
Feb 22, 2026 23:19
Updated :
Feb 22, 2026 23:19
In recent discussions in business circles and among policymakers, AI-driven solutions have become the watchword. It is as though humanity has found the ultimate magic wand to resolve all problems, from healthcare to governance to a country's economic development. This infatuation with Artificial Intelligence, or AI, actually began with the release of ChatGPT by OpenAI, an American artificial intelligence research and deployment company, in late 2022. Before that, AI was considered a highly advanced area of computer science, confined largely to research laboratories. So, before 2022, it had not captured the imagination of the common people the way it has now. The fascination has centred in particular on what is called Generative AI, the kind of AI that can produce different kinds of content, including text, imagery, audio and synthetic data. It appears AI is now the mythological Prometheus that has brought the long-sought fire to spur civilisation to new heights.

Generative AI is based on the Large Language Model (LLM), which can understand, summarise and predict human-like text. One encounters this whenever Google offers a number of suggestions as soon as a query is typed into it. That, in other words, is the AI's ability to predict what text the human enquirer might write next on the search engine. In fact, it is this predictive characteristic of Generative AI that has left many believing that the new technology can do the job of thinking for us. It is against such a background that we often come across advertisements in the mainstream as well as social media for AI training that could supposedly turn the trainee into a superhuman overnight. But things are not so simple. AI's predictive capacity relies on the massive amount of data supplied to it.
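The point about prediction resting entirely on supplied data can be made concrete with a toy sketch. This is not how a real LLM works internally (LLMs use neural networks over vast corpora), but a simple bigram counter illustrates the same principle: the "prediction" is only a reflection of what the training data happened to contain.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data; any prediction the model
# makes can only echo patterns present in this text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the data.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # "cat" appears twice after "the", so it wins
print(predict_next("fish"))  # None: "fish" never precedes anything here
```

Feed the same machinery a different corpus and the "prediction" changes accordingly, which is exactly why the quality and composition of the data matter so much.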
Notably, data are the raw materials that become information after processing. To be sure, AI is not creative like us and cannot come up with new ideas to resolve our problems. The hard thinking has to be done by our businessmen and policymakers themselves, on questions such as what course of action needs to be taken to recycle or destroy the household and toxic clinical wastes the cities produce every day. Turning to the area of governance, corruption is undoubtedly a big issue. Can AI run the anti-corruption body, identify the sources of corruption and address the age-old problem in an unbiased manner? Notably, as discussed in the foregoing, AI's strength lies in the fact that it can process untold amounts of data and extrapolate from it into the future.
Depending on the data provided, the decisions or predictions made by AI can be highly biased or prejudiced. The reason is that the data AI learns from might themselves be biased along lines of colour, gender, faith or genetic traits of the population being served by a particular corporation or government agency. The biases enter through the data, the algorithm, or the human cognitive biases of the developers of AI systems during model training. For instance, an AI hiring tool developed by Amazon had to be scrapped as it penalised resumes containing the word 'women's'. It was found that the tool in question had been trained on hiring data that had been male-dominated for decades. Similarly, in criminal justice, the COMPAS recidivism risk-assessment tool disproportionately labelled black defendants as 'high risk' compared with white defendants. In other cases, such as autonomous driving, the main source of AI's strength may prove to be its weakness. Its ability to predict the future creates what management experts term the 'confidence trap': the tendency to assume that since earlier decisions have led to positive outcomes, continuing in the same way in the future will also remain correct. But simply extrapolating the successful result of a past action and making decisions on that basis without human intervention may lead to disastrous consequences. Consider the case of driverless cars. As records go, the experience with them has not been as comfortable as thought. There have been reports of a dozen or more pedestrian fatalities involving driverless cars so far. Though this figure hardly bears comparison with the number of fatalities caused by human error, AI-related errors leading to fatalities still demand serious attention.
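How a skewed record can turn into a biased rule is easy to demonstrate with hypothetical data. The snippet below is a deliberately crude toy scorer, loosely modelled on the Amazon case described above (the actual Amazon system was far more sophisticated): when past hires rarely contain a word, a naive frequency-based scorer ends up penalising that word, even though nothing about it relates to merit.

```python
from collections import Counter

# Hypothetical, deliberately skewed history: past hires were from
# male-dominated activities, while the rejected pile contains "women's".
hired = ["chess club captain", "football team lead", "chess club member"]
rejected = ["women's chess club captain", "women's coding society lead"]

def word_scores(hired, rejected):
    """Score each word by how much more often it appears among hires
    than among rejections. A naive model would rank resumes with this."""
    hired_counts = Counter(w for doc in hired for w in doc.split())
    rejected_counts = Counter(w for doc in rejected for w in doc.split())
    words = set(hired_counts) | set(rejected_counts)
    return {w: hired_counts[w] - rejected_counts[w] for w in words}

scores = word_scores(hired, rejected)
print(scores["women's"])  # -2: penalised purely because of the biased history
```

The scorer never "decides" to discriminate; it simply reproduces the pattern baked into its training record, which is precisely the mechanism behind the hiring and COMPAS examples.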
When it comes to decisions on a scale involving the lives of a still larger number of human beings, say, airplanes carrying many passengers, with the same AI-based decision-making applied to all the planes flying across the globe at a given point in time, it sounds like a recipe for disaster. In the domain of healthcare, the issue is even more concerning. In the USA, for instance, a widely used algorithm for managing patient care was found to favour white patients over sicker black patients. Here again, a biased algorithm lay behind the colour bias of the healthcare service under study. But does this mean the use of AI should be avoided in healthcare or clinical medicine? The examples point to the fact that AI is neither a cure-all nor infallible. Even so, AI can be a powerful aid to human healthcare professionals in their decision-making. But one cannot leave the job to the 'AI health expert' entirely. Human intervention is indispensable at every step, especially where critical decisions are to be made. The same holds for decisions on governance issues, development planning, budgeting, revenue collection, operating complex traffic systems and so on.
At every step of decision-making, human intervention is required to guide the AI-based operational tools. In sum, users need to know the basic nature of the algorithm and the dataset that drive a particular AI tool or system. In this connection, some experts in the field believe that the present madness over AI, especially in the West, characterised by heavy investment, excessively high startup valuations and frantic startup building, is inflating an AI bubble that needs correction before it is too late.