
[🇧🇩] Artificial Intelligence: Its Challenges and Prospects in Bangladesh


Tiny tech, big AI power: what are 2-nanometre chips?
Agence France-Presse . Tokyo 31 December, 2025, 23:04

Taiwan’s world-leading microchip manufacturer TSMC says it has started mass producing next-generation ‘2-nanometre’ chips.

AFP looks at what that means, and why it’s important:

The computing power of chips has increased dramatically over the decades as makers cram them with more microscopic electronic components.

That has brought huge technological leaps to everything from smartphones to cars, as well as the advent of artificial intelligence tools like ChatGPT.

Advanced 2-nanometre (2nm) chips perform better and are more energy-efficient than past types, and are structured differently to house even more of the key components known as transistors.

The new chip technology will help speed up laptops, reduce data centres’ carbon footprint and allow self-driving cars to spot objects quicker, according to US computing giant IBM.

For artificial intelligence, ‘this benefits both consumer devices -- enabling faster, more capable on-device AI -- and data centre AI chips, which can run large models more efficiently’, said Jan Frederik Slijkerman, senior sector strategist at Dutch bank ING.

Producing 2nm chips, the most cutting-edge in the industry, is ‘extremely hard and expensive’, requiring ‘advanced lithography machines, deep knowledge of the production process, and huge investments’, Slijkerman told AFP.

Only a few companies are able to do it: TSMC, which dominates the chip manufacturing industry, as well as South Korea’s Samsung and US firm Intel.

TSMC is in the lead, with the other two ‘still in the stage of improving yield’ and lacking large-scale customers, said TrendForce analyst Joanne Chiao.

Japanese chipmaker Rapidus is also building a plant in northern Japan to make 2nm chips, with mass production slated for 2027.

TSMC’s path to mass 2nm production has not always been smooth.

Taiwanese prosecutors charged three people in August with stealing trade secrets related to 2nm chips to help Tokyo Electron, a Japanese company that makes equipment for TSMC.

‘This case involves critical national core technologies vital to Taiwan’s industrial lifeline,’ the high prosecutors’ office said at the time.

Geopolitical factors and trade wars are also at play.

Nikkei Asia reported this summer that TSMC, which counts Nvidia and Apple among its clients, will not use Chinese chipmaking equipment in its 2nm production lines to avoid disruption from potential US restrictions.

TSMC says it plans to speed up production of 2nm chips in the United States, currently targeted for ‘the end of the decade’.

Two nanometres is extremely tiny: for reference, an atom is approximately 0.1 nanometres across.

But in fact 2nm does not refer to the actual size of the chip itself, or any chip components, and is just a marketing term.

Instead ‘the smaller the number, the higher the density’ of these components, Chiao told AFP.

IBM says 2nm designs can fit up to 50 billion transistors, tiny components smaller than a virus, on a chip the size of a fingernail.

To create the transistors, slices of silicon are etched, treated and combined with thin films of other materials.

A higher density of transistors results in a smaller chip or one the same size with faster processing power.
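As a rough sanity check on IBM's figure (assuming, purely for illustration, that a 'fingernail-sized' chip is about 1 cm², i.e. 100 mm²), the implied density works out to roughly half a billion transistors per square millimetre:

```python
# Back-of-the-envelope check of the density claim above.
# Assumption (not from the article): a "fingernail-sized" die is ~1 cm^2.
transistors = 50e9    # IBM's figure for a 2nm design
area_mm2 = 100        # assumed die area: 1 cm^2 = 100 mm^2
density = transistors / area_mm2
print(f"{density:,.0f} transistors per mm^2")  # prints "500,000,000 transistors per mm^2"
```

At that density, each transistor occupies on the order of 2 nanometres squared of area per thousand, which is why the '2nm' label tracks density rather than any literal feature size.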

Chips will keep shrinking. TSMC is already developing ‘1.4-nanometre’ technology, reportedly due for mass production around 2028, with Samsung and Intel not far behind.

TSMC started high-volume 3nm production in 2023, and Taiwanese media says the company is already building a 1.4nm chip factory in the city of Taichung.

As for 2nm chips, Japan’s Rapidus says they are ‘ideal for AI servers’ and will ‘become the cornerstone of the next-generation digital infrastructure’, despite the huge technical challenges and costs involved.​

Proactive steps can future-proof jobs in the AI era

18 January 2026, 12:00 PM
By Dr Abdullah Shibli

FILE VISUAL: REUTERS

There is a widespread fear across US campuses, businesses, and industries that the emergence of artificial intelligence (AI) and its rapid integration into all spheres of our lives will lead to job losses and a lifestyle where robots control our daily routines. However, closer examination shows that much of this projection is exaggerated, fed by our fascination with dystopian science fiction. In reality, AI will gradually make inroads into transportation, manufacturing, and agriculture, and AI-driven robotics will take over routine tasks and deliver health and other services, boosting economic growth.

My commentary is for the younger generation considering their career paths, today’s industry leaders evaluating investment opportunities for the future, and thought leaders in academia and AI research.

Let us take stock of the current situation in the US and other advanced countries. AI is poised to eliminate a variety of jobs, particularly those involving routine tasks, data analysis, and customer service, while roles requiring human judgement and emotional intelligence are likely to remain safe.

Microsoft developed an “AI applicability score” to measure how well AI can perform the core tasks of various jobs. Jobs with high scores are more likely to be transformed or replaced by AI technologies. Many of these jobs are in domains such as computer and mathematical fields, office and administrative support, and sales.

In a recent article in The New York Times, Sal Khan, the Bangladeshi-American founder of Khan Academy, wrote, “I believe artificial intelligence will displace workers at a scale many people don’t yet realize.” However, he also calls on business leaders to invest in retraining workers to adapt to new workplace technologies.

AI-enhanced humanoid robots and autonomous machines will be in high demand across warehouses, supply chains, transportation, and agriculture. In Bangladesh, many manual tasks, including irrigation, tilling, and fertiliser application, have been increasingly mechanised for decades.

Robotics in agriculture, often referred to as “agribots,” encompasses a range of automated technologies designed to improve farming practices. These robots perform essential tasks such as planting, harvesting, monitoring crop health, and managing livestock, significantly enhancing productivity and efficiency in modern agriculture.

While AI will enable us to work and learn better, there will be a growing need for skilled labour to operate and manage this new technology. The challenge our society, particularly our new generation, faces is adapting to the changing job market and economic ecosystem. Universities and industry leaders now need dedicated groups to study and better understand the societal and economic impact of AI.

Microsoft itself is dedicating some energy to understanding the future of the workplace. It is providing considerable funding to graduate students researching market design, the economics of AI, economics and computation, social learning, applied microeconomics, microeconomic theory, and behavioural economics.

Sal Khan advised big companies to invest one percent of their profits in training their own workers to adapt to AI and robotics. He estimates that one percent of the combined profit of a dozen of the world’s largest corporations would create a $10 billion annual fund.

Now, let us turn to retraining the workers who will undoubtedly be affected by changes in the job market. The revolution brought about by AI and automation is fuelled by chipmakers such as Nvidia. In a recent interview, Jensen Huang, the founder and CEO of Nvidia, said that polls show AI-driven robots will be used for office and administrative support, domestic work, and repetitive tasks. However, robots will not be able to outperform humans in healthcare, classroom teaching, and scores of other jobs.

The US Bureau of Labor Statistics projects that nearly two million jobs will open up annually in the healthcare sector during the next decade. UNESCO estimates a global shortage of 44 million teachers by 2030. In the US, the construction industry needs more than 500,000 additional workers annually just to meet demand; meanwhile, openings for electricians and plumbers are growing faster than average. The hospitality and elder care industries—work rooted in empathy and human presence—are expanding, not shrinking.

Our teachers must adapt to the changing times and prepare our students for smart jobs. With advances in AI technology, many jobs will undergo transformation, and some roles will face a significant risk of automation. Workers in knowledge-intensive fields should be proactive in adapting to these changes, seeking opportunities for reskilling and embracing new technologies to remain competitive in the evolving job market.

At the 1 Billion Followers Summit 2026 in Dubai, educators, content creators, and learning designers called for a fundamental shift in how education is delivered. Education must pivot from rote learning to skill-based development (problem-solving, digital literacy) to equip graduates for this evolving landscape, focusing less on degrees and more on practical workplace readiness, as AI displaces some roles while boosting productivity.

Turning to agriculture: the sector is crucial to meeting the food demand of a growing population projected to reach 9.7 billion people by 2050. Robots can operate continuously, performing tasks faster and more accurately than human labour, which is in short supply anyway. While the integration of robotics in agriculture presents numerous advantages, challenges such as high initial costs and the need for skilled operators exist. Any potential job displacement can be addressed by strengthening the skilled crafts and trades required to sustain the AI-driven economy.

The goal of universities should be to move away from a mindset focused solely on producing more graduates and towards a system that equips students with workplace skills. To do that, educational reform must transition immediately from exam-based assessment to skill-based learning.

Dr Abdullah Shibli is an economist and academic, currently working with a non-profit fiscal intermediary in Boston, US. He previously worked for the World Bank and Harvard University.​

Staying human in the age of AI

By Nadia Jahan

Image: Esma Melike Sezer/ Unsplash

AI has slipped into daily life with a kind of stealth. One moment you are using it to tidy up an email or translate a paragraph, and the next you are letting it outline your presentation, draft your report, suggest your next move, even tell you what you feel. The shift is not just about new software. It is about habits. In a country where young people are under relentless pressure to compete, save time and sound polished, the temptation is obvious: delegate as much as possible, move faster than everyone else, and let the machine take the strain.

But there is a cost to handing over too much. The more we outsource, the more we risk hollowing out the very qualities that make us employable, resilient, and alive to one another. Staying human in the age of AI means knowing when to use the tool and when to step back from it, not out of nostalgia, but because some parts of life only work when we do them ourselves.

There is an easy misunderstanding about AI that makes over-delegation feel harmless. We treat it like a calculator for words, a neutral device that simply speeds up what we already know. Yet many AI systems do more than compute. They generate. They suggest. They complete our thoughts for us, often in a tone that sounds confident and coherent. That can create the illusion of competence even when the underlying thinking is thin. If we accept that illusion too often, we begin to live in a world where sounding right matters more than being right, and where the first draft becomes the final one.


Image: Giingerann/ Unsplash

The first thing we lose is the muscle of judgement. Writing a message, shaping an argument, or making a decision is not only about producing an output. It is about weighing what matters, anticipating how it will land, and taking responsibility for the consequences. When you let AI do the heavy lifting every time, you may still get something workable on the page, but you gradually weaken the inner sense that tells you what is true, what is fair, what is missing, and what does not sound like you. That sense is slow to build and easy to erode.

There is also a practical risk: dependency makes people fragile. AI tools can be wrong, inconsistent, or strangely generic. They can flatten nuance, misunderstand context, and reproduce patterns that are common rather than correct. If you have not practised doing the work yourself, you cannot reliably catch the errors. You also struggle when the stakes rise: when a client challenges a claim, when an interviewer asks you to explain your reasoning, when you have to negotiate, persuade, or improvise in real time. In those moments, there is no prompt that can replace a well-trained mind.

The second thing we lose is originality. Not in the grand sense of artistic genius, but in the everyday sense that your work carries a trace of your experience: your curiosity, your humour, your way of seeing. AI can imitate styles and remix familiar patterns, which is exactly why it can be useful for routine tasks. But if you let it write everything, you end up speaking in borrowed rhythms. You become less memorable. You become easier to replace.

This is where the so-called “human touch” becomes more than a sentimental phrase. In competitive workplaces and crowded markets, the human touch is often the differentiator.


It is the ability to listen properly to what someone is asking, to sense what they are not saying, to respond in a way that makes them feel understood rather than processed. It is empathy, timing, judgement, tact. It is also taste: knowing what to leave out, when to simplify, when to insist on complexity, when to be firm, when to be kind. AI can help with drafts and options, but it cannot fully replace the lived intelligence that comes from being in the world, paying attention, and caring about consequences.


In Bangladesh, this matters because so much opportunity depends on relationships. Whether you are pitching a client, working in a team, running a small business, freelancing online, or building a startup, trust is the currency. Trust grows through consistency and human presence. It grows when you show up, reply thoughtfully, keep your word, and treat people as people. If AI encourages a culture of shortcuts where every message is a template and every interaction is optimised for speed, trust becomes harder to earn. You might respond faster, but you can sound less real.

The deeper danger is that over-delegation does not stop at work. It creeps into the personal. When people use AI to avoid awkward conversations, to manage emotions, to write apologies, to craft romantic messages, to mediate conflicts, they may feel relief in the moment. But avoidance has a price. Relationships are not built through perfect phrasing. They are built through vulnerability, patience, and the willingness to sit with discomfort. If you outsource the difficult parts of being with other people, you do not develop the skills that make intimacy possible.

That is why social skills are not a soft extra in the age of AI. They are a survival skill. As machines get better at routine cognitive output, what remains valuable is what machines cannot do in the same way: build rapport, read a room, resolve conflict, motivate a team, mentor someone younger, earn a customer’s loyalty, handle criticism without collapsing, and communicate under pressure. These skills have always mattered. Now they matter more, because they are harder to automate and because they protect us from turning ourselves into something machine-like.

Image: Michaelle Daoust/ Unsplash

The irony is that technology often makes social skills feel optional. When you can text instead of call, when you can order without speaking, when you can work remotely and never meet your colleagues, you can go through days with minimal human friction. AI takes this further by offering a substitute for interaction: an entity that always responds, never gets tired, and rarely pushes back. If we are not careful, we start to prefer that frictionless exchange to real relationships, which are messy and demanding. Over time, the preference becomes a habit, and the habit becomes a way of life.

This is how we risk mechanising ourselves. Not because machines become human, but because humans begin to adopt the machine’s logic. We optimise everything. We minimise effort. We reduce conversation to transactions. We treat people as obstacles or opportunities, not as complex beings. We choose the easiest route rather than the most meaningful one. When enough individuals do this, society becomes colder. Loneliness rises. Trust falls. Even success feels strangely thin.

Staying human, then, is partly a matter of deliberate resistance. It means choosing, again and again, to practise what AI makes easy to avoid.

It means writing sometimes without assistance, so you can hear your own voice and strengthen your ability to think through language. It means doing mental work slowly enough to understand it, rather than producing answers quickly enough to move on. It means reading deeply rather than skimming summaries, because attention is a form of respect, and because complex problems cannot be solved with shallow understanding.

It also means making extra efforts to protect human-to-human connection in a world that quietly erodes it. Call a friend instead of sending a perfectly composed message. Sit with someone in person even when it is inconvenient. Ask questions you cannot outsource. Listen without planning your next reply. Join communities that are not about productivity: sports clubs, volunteer groups, study circles, cultural events, neighbourhood networks. These are not distractions from the future. They are part of what makes any future worth living in.

For young people especially, there is a temptation to treat social skills as secondary to technical skills. Learn the tools, build the portfolio, collect the certificates, and the rest will follow. But the person who thrives in an AI-shaped economy will often be the one who can combine competence with connection. The future belongs to people who can use machines without becoming machine-like: who can collaborate across differences, communicate clearly, negotiate fairly, and keep a sense of purpose bigger than optimisation.

None of this requires rejecting AI. It requires putting it in its place. AI is best understood as an amplifier. Used wisely, it can amplify your learning, your productivity, your creativity. Used carelessly, it amplifies your laziness, your dependence, your isolation. The difference is not the tool. It is the human using it.

The point of staying human is not to prove you can do everything the hard way. It is to protect what only humans can do well: meaning-making, moral judgement, genuine care, solidarity, courage. These are not romantic ideals. They are practical advantages in a volatile world. They help people adapt, recover, cooperate, and build institutions that last.

In the coming years, Bangladesh’s young people will be told, repeatedly, that the future belongs to those who embrace AI. That is true, in a narrow sense. But the broader truth is that the future belongs to those who embrace people. The real challenge is not learning to prompt a machine. It is learning to remain fully human while you do.

Nadia Jahan is a development communications professional based in Dhaka.

AI is becoming a companion for some teens, and that’s a problem
29 January 2026, 16:13

By Ayman Anika

Photo: Collected / bertellifotografia / Pexels

Chatbots were originally designed to answer questions and help with tasks, but an unexpected trend is emerging: many teenagers are treating these systems as companions. What started as curiosity is turning into what experts describe as a form of AI addiction — not in the clinical sense yet, but as a pattern of reliance that can interfere with daily life.

This isn’t just about teenagers preferring technology over face-to-face interaction. It’s about the way AI is being used to fill emotional gaps. Teens are spending hours chatting with AI programs, not just for help with homework but for emotional support, encouragement, and social interaction. Some report talking to AI more than they talk to real friends.

The appeal is understandable. AI doesn’t judge, doesn’t interrupt, and is always available. For teens who feel misunderstood, lonely, or socially anxious, a chatbot can feel like a safe space.

Photo: Collected / pavel danilyuk / Pexels


Providers of these technologies design them to be responsive, engaging, and empathetic, which only strengthens the habit. A conversation that begins with a homework question can easily turn into an emotional outlet.

But that convenience comes with a cost. A 2024 survey of trends in AI use found that when children lean on technology instead of people, it can affect emotional development, school performance, and real-world relationships.

Because chatbots can mimic empathy without actually experiencing it, they give teens a sense of connection that isn’t grounded in mutual understanding. That can make real relationships feel harder, or less rewarding.

Experts point out that habitual overuse of any technology — in this case AI — starts to look like more than just a habit. Patterns include prioritising chatbot interaction over friendships, hiding usage from loved ones, and becoming distressed when access is restricted. These behaviours mirror what psychologists recognise as problematic tech use, similar to excessive gaming or social media addiction.

Parents can play a crucial role in recognising this early. Signs include sudden mood shifts when screen time is limited, declining interest in offline activities, and frequently using AI late at night. Conversations about technology often focus on safety and screen time limits, but with AI, the discussion needs to include how and why teens are using it.


Opening a dialogue without judgment helps teens reflect on their own habits. Rather than banning AI outright, what matters more is helping them find balance and understanding the difference between supportive use and dependency.

Photo: Collected / bertellifotografia / Pexels


Setting clear limits and routines can help. Experts suggest designated tech-free hours, encouraging in-person interaction, and promoting hobbies unrelated to screens. Some families choose to check in daily about how technology made a teen feel — not just how long they used it. That shift from quantity to quality of use aligns technological tools with healthy emotional development.

There’s no simple fix. AI is here to stay, and for many it will be a useful resource. But when reliance on AI begins to replace real human connection or interferes with daily life, parents and caregivers need to pay attention. Helping teens navigate this new terrain is as much about emotional guidance as it is about digital rules.

AI doesn’t have to be the problem. But without awareness and limits, it can become a substitute for the support and connection that only real human relationships can provide.
