DeepTech Archives - Tech Research Online

Beijing Backs AI Startup Manus as China Hunts for the Next DeepSeek https://techresearchonline.com/news/beijing-ai-startup-manus/ Fri, 21 Mar 2025 16:59:25 +0000

The post Beijing Backs AI Startup Manus as China Hunts for the Next DeepSeek appeared first on Tech Research Online.

China is fast-tracking its artificial intelligence (AI) plans as Beijing-based AI startup Manus receives strong backing from local authorities and investors. According to Reuters, the Chinese AI startup registered its AI assistant and appeared on a state media broadcast on Thursday.

This development comes in the wake of China’s increasing interest in developing domestic AI players that can compete on the global stage. After DeepSeek’s breakout launch, Chinese investors are hunting for the next domestic AI success story.

Manus emerges at a time when China’s AI investment is on the rise, as the nation steps up its efforts to gain technological independence. With strong backers and government patronage, Manus is one of the leading Chinese AI startups concentrating on creating new generative AI models and products.

The Chinese startup went viral after it claimed to be the world’s first general AI agent. The new AI assistant is capable of making decisions and executing tasks autonomously. It also requires fewer prompts than ChatGPT or DeepSeek.

China’s investment in AI startups such as Manus is indicative of a wider strategy to diminish reliance on overseas technologies while fortifying its role as a world leader in the AI race. The Chinese government has made AI innovation a top priority, regarding it as an important driver of future economic and national competitiveness.

AI Startup Manus Gains Momentum as China Eyes the Next DeepSeek

Backed by the government, AI startup Manus is positioning itself as a possible challenger to current AI leaders. Manus specializes in large language models, generative AI tools, and enterprise-level AI solutions. Analysts see Manus as poised to follow the success path of DeepSeek AI in China, which made waves globally with its cutting-edge AI products and explosive growth.

Manus’s swift ascent is a testament to Beijing’s strategic emphasis on developing a strong AI ecosystem capable of producing global tech leaders. As China pursues its own alternatives to American and European AI titans, startups such as Manus are key to determining the country’s AI future.

Competition Intensifies Among Top AI Startups in China

The success of DeepSeek AI has inspired a new wave of innovation and competition among top AI startups in China. Companies are racing to develop the most advanced AI models, secure strategic partnerships, and attract significant investment.

Beijing’s active support for Manus also highlights the government’s commitment to creating an environment conducive to AI innovation. This includes favorable policies, funding programs, and access to large datasets, all aimed at accelerating the development of homegrown AI capabilities.

China’s AI Ambitions Gain Strength with Manus

The Beijing municipal government said on Thursday that the Chinese version of Manus’ previous AI assistant, Monica, has passed the required registration for generative AI apps in China, clearing a crucial regulatory hurdle.

All generative AI apps in China are required to adhere to strict rules to avoid the creation of content that is sensitive or harmful, as defined by the authorities.

In addition, last week, Manus announced a strategic collaboration with the group behind Alibaba’s AI models, solidifying its place in the AI ecosystem.

French Startup Pasqal Partners with Nvidia to Speed-Up Development of Quantum Computing Applications https://techresearchonline.com/news/pasqal-nvidia-quantum-computing-partnership/ Fri, 21 Mar 2025 16:52:19 +0000

The post French Startup Pasqal Partners with Nvidia to Speed-Up Development of Quantum Computing Applications appeared first on Tech Research Online.

French quantum computing startup Pasqal has partnered with Nvidia to give its clients access to additional tools for developing quantum applications, Yahoo Finance reported. The collaboration between the two tech companies is expected to boost development of quantum applications.

System Integration

Pasqal was founded in 2019. The startup says it has raised over $151.8 million in funding to date. On March 20, Pasqal announced that it had integrated its quantum computing systems with Nvidia’s open-source platform CUDA-Q to fast-track the development of quantum programs in high-performance environments.

“Our collaboration with NVIDIA will enable us to offer a much-requested interface and programming model for high performance computing and wider quantum community and ultimately accelerate the development of quantum applications,” Pasqal CEO Loic Henriet said.

Pasqal has been participating in Nvidia’s Inception program for startups. Pasqal’s quantum computing integration with CUDA-Q will complement Nvidia’s existing open-source library, Pulser, which allows custom experiments on neutral-atom devices. Through CUDA-Q, researchers can integrate AI supercomputers with quantum processing units from startups like Pasqal.

Nvidia’s Senior Director of CAE, Quantum and CUDA-X, Tim Costa, says this seamless integration allows researchers to achieve breakthroughs in quantum computing. Pasqal sees the Nvidia partnership as instrumental in boosting the capabilities of its cloud computing platform.

Nvidia’s Approach to Quantum

Nvidia’s perspective on quantum computing differs from that of other big tech companies. The chipmaker does not see a quantum race to build the largest computer. Instead, the company sees a shared infrastructure challenge that requires accelerated computing, deep integration, and scaled collaboration.

This perspective was evident during the GTC event held earlier this year. Throughout the event, Nvidia made it clear that it would stay away from the quantum hardware race and instead focus on developing systems and tools to enable others to go further faster. The US AI chip manufacturer has adopted a similar approach in other tech areas like autonomous vehicles and robotics.

“We don’t build our own self-driving car, but we help everyone else who does. We don’t build our own robots, but we help everyone else who does. We don’t build our own quantum computer, but our mission is to bring AI and accelerated computing to help everyone else who does,” Nvidia’s Group Product Manager for Quantum Computing, Sam Stanwyck said in a recent interview.

Nvidia excels in AI and accelerated computing, and in embedding this power into wider ecosystems, Stanwyck added, underscoring the company’s main objective.

“We are an accelerated computing company, and we see quantum as an important part of the future of accelerated computing,” he said.

Nvidia sees its role in quantum hardware as that of reducing bottlenecks, fast-tracking error correction, and facilitating quantum classical workflows.

Nvidia’s Quantum Stocks Impact

Nvidia CEO Jensen Huang’s comments on how long the world might have to wait before quantum computers become a reality sent quantum stocks plunging earlier this year.

“If you said 15 years for very useful quantum computers, that would probably be on the early side. If you said 30, it’s probably on the late side. But if you picked 20, I think a whole bunch of us would believe it,” Huang said during the Consumer Electronics Show event in January of this year.

Huang has since changed his thoughts about this timeline, expressing shock that his comments affected the markets.

“My first reaction was, I didn’t know they were public. How can a quantum company be public?” he said.

Huang apologized for his comments during Nvidia’s Quantum Day event on March 20. However, stocks of leading quantum computing companies like D-Wave Quantum, Rigetti Computing, and Quantum Computing remained in the red.

The Nvidia CEO says that “quantum computing has the potential, and all of our hopes that it will deliver extraordinary impact, but the technology is insanely complicated.”

Indian Generative AI Company Sarvam AI Unveils Full-Stack GenAI Platform for Multilingual Users https://techresearchonline.com/news/sarvam-ai-generative-ai-platform/ Tue, 18 Mar 2025 15:36:39 +0000

The post Indian Generative AI Company Sarvam AI Unveils Full-Stack GenAI Platform for Multilingual Users appeared first on Tech Research Online.

In a groundbreaking development for the Indian technology landscape, Sarvam AI is making headlines as it works towards developing India’s first homegrown generative AI, Business Today reported.

This Bangalore-based startup is setting ambitious goals to develop an AI model to meet the country’s diverse linguistic needs. Founded by Dr. Vivek Raghavan and Dr. Pratyush Kumar, Sarvam focuses on AI technology that represents India’s cultural and language diversity.

Global investors quickly recognized the potential of Sarvam AI. The startup raised a Series A investment of $41 million in December 2023, led by Lightspeed, and with participation from Peak XV Partners and Khosla Ventures. This investment propelled Sarvam AI to become among India’s top AI startups, reinforcing its ability to construct population-scale AI solutions tailored to India’s multi-linguistic richness.

By 2024, Sarvam AI launched a full-stack Generative AI platform with five major products — Sarvam Agents, Sarvam 2B, Shuka 1.0, Sarvam Models, and A1. The products were created to promote the adoption of AI in Indian languages and increase accessibility across the nation.

Sarvam AI Leads the Charge in Open-Source AI Models in India

Being one of the new AI startups in India, Sarvam AI is dedicated to fostering transparency and access to artificial intelligence. Among its initiatives is developing open-source AI models in India. Through its open-source strategy, the startup seeks to create a collaborative system where researchers, developers, and institutions can contribute and gain from these AI developments.


The Rise of Indian Generative AI Companies

The emergence of Sarvam AI signals a new era for AI development in the country. As an Indian generative AI company, Sarvam AI is set to play an important role in shaping the future of AI in India. Its effort to create India’s first homegrown generative AI advances the country’s broader digital transformation goals and its drive for self-sufficiency in state-of-the-art technologies.

The generative AI sector is growing rapidly worldwide, and Sarvam AI’s initiative positions it as a serious contender in India’s AI landscape. By focusing on local languages and cultural nuances, Sarvam AI is ensuring that the benefits of generative AI reach every corner of the country, from urban hubs to rural areas.

Sarvam AI Driving India’s Generative AI Revolution

Sarvam AI’s vision for the creation of India’s first homegrown generative AI reflects the growing ambition within the Indian technology sector. As an AI startup in India, it is setting an example for other companies, proving that world-class AI innovation can emerge from India.

By developing open-source AI models in India, Sarvam AI is not only creating technology but also strengthening the developer community and promoting a spirit of cooperation. Its journey confirms the ability of an Indian generative AI company to lead the next wave of AI-driven transformation, making AI accessible and relevant to every Indian in every language.

SoftBank-OpenAI Partnership Takes Shape with Acquisition of Sharp Plant in Japan https://techresearchonline.com/news/softbank-openai-ai-data-center-japan/ Fri, 14 Mar 2025 15:19:19 +0000

The post SoftBank-OpenAI Partnership Takes Shape with Acquisition of Sharp Plant in Japan appeared first on Tech Research Online.

SoftBank has started implementing its plans of setting up a key AI operation in Japan, TechCrunch has reported. Through a strategic SoftBank OpenAI Japan partnership, the company has paid $676 million to acquire a plant previously owned by electronics firm Sharp in Sakai city, Osaka.

The SoftBank Sharp plant acquisition includes land and buildings at the Sakai Plant in Osaka. The company plans to convert the factory into an AI data center.

An Early Step

The purchase of the factory is an important early step for SoftBank. AI data centers serve as critical linchpins in the huge generative AI boom that is currently sweeping through the tech world. Tech companies need huge data center capacity to train AI models and provision subsequent services.

In February this year, the Japanese company announced that the SoftBank OpenAI collaboration was aimed at deploying an advanced enterprise AI known as Cristal Intelligence in Japan.

A statement released by the tech company showed that the purpose of the partnership was “to develop and market Advanced Enterprise AI called “Cristal intelligence.” Cristal intelligence will securely integrate the systems and data of individual enterprises in a way that is customized specifically for each company.”

SoftBank has also committed to making annual investments in AI tools, including ChatGPT.

“SoftBank Group Corp. will spend $3 billion annually to deploy OpenAI’s solutions across its group companies, making it the first company in the world to integrate Cristal intelligence at scale, as well as deploying existing tools like ChatGPT Enterprise to employees across the entire group,” the statement added.

Launching AI Agents in Japan

OpenAI plans to bring its AI technology to the Japanese market by developing AI models at the Sakai plant. SoftBank and OpenAI set up a joint venture called SB OpenAI Japan, in which the companies hold equal shares.

The two companies will use the joint venture to train AI models using customer data acquired through marketing, human resources and other activities. SB OpenAI Japan will then develop and market customized AI agents to customers.

“The JV will serve as a springboard for introducing AI agents tailored to the unique needs of Japanese enterprises while setting a model for global adoption,” SoftBank mentioned in its statement back in February.

In the long run, the two companies plan to commercialize AI agents developed by OpenAI in Japan, a global first. Developing AI bots that can handle advanced tasks will require SoftBank and OpenAI to learn from data. OpenAI will supply the graphics processing units required to develop and build AI models in Japan, and will likely procure the GPUs from Nvidia as well as through the Stargate Project.

Considering that setting up the Sakai data center could require up to 100,000 GPUs, OpenAI’s Japan expansion could mean that the AI startup makes an investment of up to $6.7 billion.

SoftBank and OpenAI launched their joint venture soon after collaborating with Oracle and other companies to set up AI infrastructure under the Stargate Project in the US earlier this year. The Japanese tech giant has also invested $25 billion in OpenAI.

Largest Data Center

SoftBank’s AI investment at the Sakai data center shows how the two companies are expanding their collaboration scope. Once complete, the new facility will be among the largest data centers in Japan and the third for SoftBank. The Japanese tech giant plans to commence operations at the Sharp factory in 2026.

The company expects that by this time, the factory will have adequate power capacity to run the AI data center. Initially, the power capacity in the facility will be around 150 megawatts. SoftBank plans to increase it to over 240 megawatts over time. SoftBank will create and run its generative AI models from the Sakai facility. The tech giant already has an operational data center in Tokyo and is building another one in Hokkaido.

Meta Unveils Custom AI Training Chip to Boost Machine Learning Capabilities https://techresearchonline.com/news/meta-ai-training-chip/ Wed, 12 Mar 2025 16:06:08 +0000

The post Meta Unveils Custom AI Training Chip to Boost Machine Learning Capabilities appeared first on Tech Research Online.

Meta has officially joined the AI hardware race by testing its first in-house AI training chip, a significant milestone in the company’s overall artificial intelligence plans. According to Reuters, the social media giant made the move to reduce its reliance on external chipmakers like Nvidia. Meta has previously cut its reliance on Bing and Google by launching its own AI search engine.

The sources said that Meta has started a small deployment of the custom chip and plans to ramp up production for wider use if the experiment goes well. The company aims to cut its huge infrastructure costs as it invests heavily in AI development. In January 2025, Meta’s profits surged as Zuckerberg announced the company’s AI strategy.

Meta’s Push for AI Hardware Innovation

With artificial intelligence leading the charge in technological development, businesses are increasingly turning to bespoke hardware to maximize performance and minimize dependence on third-party chip manufacturers. The Meta AI training chip is intended to support the firm’s expanding AI operations, specifically training intricate machine learning models for its various platforms.

Meta forecast its 2025 expenses at $114 billion to $119 billion, of which up to $65 billion is to be invested in AI infrastructure development. Last month, according to revenue analysts, Meta Platforms is expected to thrive in 2025 in comparison to other tech giants like Microsoft and Amazon.

This is part of an overall plan by Meta to increase its AI abilities, with a focus on content moderation, recommendation systems, and generative AI for the metaverse. With the development of a Meta custom AI chip, the company wishes to improve efficiency while reducing energy consumption and operating expenses.

The sources said that Meta’s training chip is a dedicated accelerator designed to handle only AI-specific tasks, unlike the general-purpose graphics processing units commonly used for AI workloads. This specialization can make it more power-efficient for those workloads.

The Road to Full Deployment

Although Meta’s custom AI chip is at an early stage of development, the firm has already begun testing it. Company executives said the aim is to start using Meta’s own AI chips by 2026 for training, the compute-intensive process of feeding huge volumes of data to AI systems.

Last week, Meta’s Chief Product Officer Chris Cox said at the Morgan Stanley technology, media and telecom conference, “We’re working on how would we do training for recommender systems and then eventually how do we think about training and inference for gen AI.”

Meta’s Step Ahead

Even with these developments, Meta is set to face strong competition from established players in AI chipmaking like Nvidia, Google, and AMD. These firms are already well-established in AI chip technology, which leaves one wondering how Meta’s bespoke AI chip will hold up in efficiency, scalability, and long-term performance.

Meta’s move to create an AI model training chip is a step forward in AI hardware self-sufficiency. As AI technologies grow more complex, the need for hardware solutions well-tuned for AI applications will only increase. Through the investment in its custom AI chip, the firm is establishing itself as a trailblazer in AI-driven technology.

The Rise of AI Agents: From Simple Automation to Intelligent Decision-Makers https://techresearchonline.com/blog/what-are-ai-agents/ Wed, 12 Mar 2025 15:07:45 +0000

The post The Rise of AI Agents: From Simple Automation to Intelligent Decision-Makers appeared first on Tech Research Online.

Introduction

Think of a world where software is not just a collection of pre-programmed routines, but a smart system that learns, adapts, and decides on its own. AI agents are bringing that world to life! These AI agents range from virtual assistants to intelligent programs that maximize sales. By simplifying mundane tasks, maximizing efficiency, and improving decision-making, they are revolutionizing the way companies operate.

AI and robotics are changing the way business is done, automating monotonous tasks and improving decisions across industries. AI agents differ from traditional software in that they can work independently, learn from data, and adapt to new situations without human intervention. In this blog, we will discuss how AI agents work, their advantages and disadvantages, and their development.

Evolution of AI Agents

AI agents have evolved greatly, from basic rule-based systems to sophisticated autonomous systems capable of learning and adaptation. The earliest AI agents acted on fixed commands, reacting to pre-defined inputs with minimal flexibility. With advances in artificial intelligence and machine learning, modern AI agents now process large amounts of data, recognize patterns, and make real-time decisions.

Today’s AI agents integrate natural language processing, reinforcement learning, and neural networks, allowing them to interact with humans, predict outcomes, and optimize operations across industries. As AI technology advances, these agents are becoming more autonomous, more intelligent, and essential to driving innovation in various fields.

How AI Agents Work

AI agents operate by examining their surroundings and data to meet specific objectives. Unlike traditional software, which follows set instructions, business AI agents utilize advanced technologies like machine learning, deep learning, and natural language processing to understand data and work towards their goals. These agents evolve by learning from their interactions, which helps them become decision-making support systems for their users.

This adaptability makes AI agents valuable across sectors, from customer service chatbots to tackling intricate challenges in finance, healthcare, and cybersecurity.

Key components of how AI agents work:

  • Perception and Data Collection – AI agents collect data from various sources, including sensors, APIs, databases, and user input. This information forms the basis for decision-making, so the agent knows its environment with confidence.
  • Processing and Decision-Making – After collecting data, AI agents process the information through machine learning algorithms, probabilistic models, and deep learning. They analyze the possible actions and choose the response best suited to their goal.
  • Learning and Adaptation – AI agents apply methods such as reinforcement learning and neural networks to adapt their behavior. They identify patterns, refine their decision-making strategies, and improve accuracy over time.
  • Action Execution – Depending on their purpose, AI agents carry out tasks such as providing answers, automating business processes, or controlling physical objects such as robots and autonomous vehicles. They execute these functions in a way that reduces human labor and improves efficiency.
  • Feedback Loop – AI agents use feedback from their actions and interactions to improve future performance. Through supervised learning, human input, or self-improvement, they become more efficient and smarter with time.

By integrating these capabilities, AI agents enhance efficiency, automate tasks, and drive intelligent decision-making across industries.
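The perceive-decide-act-learn loop described above can be sketched in a few lines of Python. This is a minimal illustration only: the thermostat-style environment, the threshold rule, and the crude feedback averaging are hypothetical choices made for the example, not any real product's logic.

```python
# Minimal sketch of an AI agent loop: perception, decision-making,
# learning, action execution, and a feedback loop (all toy logic).

class SimpleAgent:
    def __init__(self):
        self.memory = []  # (percept, action, feedback) triples for the feedback loop

    def perceive(self, environment):
        # Perception and data collection: read from the environment (a plain dict here)
        return environment["temperature"]

    def learned_threshold(self):
        # Learning and adaptation: nudge the base threshold using past feedback
        corrections = [fb for _, _, fb in self.memory if fb is not None]
        return 25 + (sum(corrections) / len(corrections) if corrections else 0)

    def decide(self, percept):
        # Processing and decision-making: pick an action against the learned threshold
        return "cool" if percept > self.learned_threshold() else "idle"

    def act(self, action, percept, feedback=None):
        # Action execution + feedback loop: record the outcome for future decisions
        self.memory.append((percept, action, feedback))
        return action


agent = SimpleAgent()
percept = agent.perceive({"temperature": 30})  # perceive
action = agent.decide(percept)                 # decide: 30 > 25, so "cool"
agent.act(action, percept)                     # act and remember
```

A real agent would swap the dict for sensors or APIs and the averaging for a proper learning algorithm, but the control flow is the same.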

Types of AI Agents

AI agents are classified according to their complexity, learning ability, and autonomy. They can be as simple as rule-based systems or as complex as advanced agents that learn and develop over time. The following are the primary types of AI agents, with real-world examples of their use.

1. Simple Reflex Agents

Simple reflex agents work by fixed rules and respond only to the current situation, without learning or remembering previous experiences. They perform well in predictable situations but struggle with complex or dynamic conditions.

  • How They Work: They function on IF-THEN rules – if a certain condition is met, they take the corresponding action.
  • Where They Are Used: Spam filtering, simple automation, smart thermostats.
  • Functionality: They are ideal for simple, repetitive tasks but lack versatility.
  • Example: Spam email filters that label messages as spam based on keywords or sender behavior.
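An IF-THEN agent of this kind fits in a few lines. The keyword list below is an illustrative assumption, not a real filter's rules; note the agent has no memory and no learning, only a rule applied to the current input.

```python
# Minimal sketch of a simple reflex agent: fixed IF-THEN spam rules,
# no memory, no learning (toy keyword list for illustration).

SPAM_KEYWORDS = {"free money", "winner", "click here", "urgent offer"}

def spam_reflex_agent(message: str) -> str:
    """IF a spam keyword appears in the message, THEN label it spam."""
    text = message.lower()
    if any(keyword in text for keyword in SPAM_KEYWORDS):
        return "spam"
    return "inbox"

print(spam_reflex_agent("You are a WINNER, claim your free money now!"))  # spam
print(spam_reflex_agent("Meeting moved to 3pm tomorrow."))                # inbox
```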

2. Model-Based Reflex Agents

Conversational AI agents use model-based reflex systems to understand context, remember previous interactions, and make informed decisions. In contrast to simple reflex agents, they can cope with changing environments by examining prior conversations, guaranteeing more customized and context-aware responses.

  • How They Work: They process the current input and consult stored knowledge of previous interactions to make informed decisions.
  • Where They Are Used: Intelligent AI assistants, self-driving robots, driverless cars.
  • Functionality: These agents are better at adaptive decision-making, making them well suited to tasks that require memory and learning.
  • Example: AI-powered customer service chatbots that remember previous conversations to give context-aware answers.

3. Goal-Based Agents

Goal-based agents take AI a step further by defining objectives and evaluating possible actions to achieve them. They apply planning strategies and search algorithms to decide the most appropriate action.

  • How They Work: They evaluate many possibilities, predict the outcomes, and choose the best way to reach their goal.
  • Where They Are Used: Autonomous vehicles, financial market prediction, strategic game AI.
  • Functionality: Goal-driven agents improve decision-making by emphasizing long-term goals rather than short-term responses.
  • Example: Self-driving systems such as Tesla Autopilot, which analyze real-time traffic information to plot safe routes.
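A goal-based agent can be sketched as a search over possible action sequences. The toy road map below is an illustrative assumption standing in for a real road network; breadth-first search plays the role of the planning and search algorithms mentioned above.

```python
# Minimal sketch of a goal-based agent: search over action sequences
# (routes) to reach a goal. The road map is a toy assumption.

from collections import deque

def plan_route(roads, start, goal):
    """Evaluate possible paths breadth-first and return the shortest one to the goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path  # the first path BFS completes is the shortest
        for neighbor in roads.get(path[-1], []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # goal unreachable

roads = {
    "home": ["A", "B"],
    "A": ["C"],
    "B": ["C", "office"],
    "C": ["office"],
}
print(plan_route(roads, "home", "office"))  # ['home', 'B', 'office']
```

Real planners use weighted graphs and live traffic data, but the principle is the same: enumerate futures, then commit to the best action sequence.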

4. Utility-Based Agents

Utility-based agents go beyond goal-setting by evaluating possible outcomes and assigning a value to each. These intelligent agents consider factors such as risk, cost, and user preferences to determine the most efficient action.

  • How They Work: These agents assess many possible outcomes and choose the action that maximizes overall utility by balancing trade-offs.
  • Where They Are Used: AI personal assistants, healthcare diagnosis systems, financial decision-making.
  • Functionality: Utility-based agents optimize decision-making by choosing the best possible outcome rather than merely achieving a single goal.
  • Example: Virtual assistants such as Google Assistant and Siri, which provide personalized recommendations based on user behavior and preferences.
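The idea can be sketched as scoring each candidate action with a utility function and picking the maximum. The candidate actions, weights, and utility formula below are illustrative assumptions, not any assistant's actual logic.

```python
# Minimal sketch of a utility-based agent: score each action by
# benefit minus weighted cost and risk, then choose the maximum.

def utility(action):
    # Higher benefit raises the score; cost and risk (weighted) lower it.
    return action["benefit"] - 0.5 * action["cost"] - 2.0 * action["risk"]

def choose_action(actions):
    return max(actions, key=utility)

actions = [
    {"name": "fast route",  "benefit": 10, "cost": 6, "risk": 2.0},  # utility 3.0
    {"name": "cheap route", "benefit": 7,  "cost": 1, "risk": 0.5},  # utility 5.5
    {"name": "safe route",  "benefit": 8,  "cost": 4, "risk": 0.0},  # utility 6.0
]
best = choose_action(actions)
print(best["name"])  # safe route
```

Changing the weights changes the decision, which is exactly the point: utility-based agents encode preferences and trade-offs, not a single fixed goal.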

5. Learning Agents

Learning agents continuously improve their decisions by applying machine learning. They learn from experience, adjust strategies through experimentation, and build on previous performance.

  • How They Work: These agents detect patterns, learn from data, and refine their responses to improve over time.
  • Where They Are Used: Personalized recommendations, cybersecurity threat detection, fraud prevention.
  • Functionality: Growing more intelligent over time makes learning agents the best fit for use cases that require continuous adaptation.
  • Example: Personalization engines such as those of Amazon and Netflix, which recommend products or content based on user activity.
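A learning agent can be sketched as an epsilon-greedy recommender that updates its reward estimates from feedback. The item catalogue and simulated click rates below are illustrative assumptions standing in for real user behavior, not how Amazon or Netflix actually work.

```python
# Minimal sketch of a learning agent: an epsilon-greedy recommender
# that improves its choices from simulated click feedback.

import random

class LearningRecommender:
    def __init__(self, items, epsilon=0.1):
        self.values = {item: 0.0 for item in items}  # estimated reward per item
        self.counts = {item: 0 for item in items}
        self.epsilon = epsilon

    def recommend(self):
        # Explore occasionally; otherwise exploit the best-known item
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, item, reward):
        # Incremental average: nudge the estimate toward the observed reward
        self.counts[item] += 1
        self.values[item] += (reward - self.values[item]) / self.counts[item]

random.seed(0)
agent = LearningRecommender(["article", "video", "podcast"])
true_rates = {"article": 0.2, "video": 0.7, "podcast": 0.4}  # hidden from the agent
for _ in range(2000):
    item = agent.recommend()
    reward = 1.0 if random.random() < true_rates[item] else 0.0
    agent.learn(item, reward)

best = max(agent.values, key=agent.values.get)  # with enough trials, "video" should win
```

The same explore-learn-exploit loop, scaled up with richer models, underlies recommendation, fraud detection, and threat detection systems.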

Benefits of AI Agents

AI agents are bringing about a revolution in industries by automating and streamlining processes and enabling better decisions. Their ability to work independently, learning from experience and user interactions, makes them an asset for individuals and businesses. Some of the major benefits of AI agents are as follows:

  • Automation of Repetitive Tasks
    AI agents perform repetitive, routine functions at high speed and accuracy, eliminating manual labor and enabling employees to focus on high-value activities.
  • Enhanced Decision-Making
    AI agents process large amounts of information and provide actionable intelligence, allowing businesses to make more accurate, data-driven decisions.
  • Improved Customer Experience
    Through insights into user behavior and preferences, intelligent systems offer customized interactions, resulting in enhanced customer engagement and satisfaction.
  • Cost Savings and Operational Efficiency
    AI agents lower operational costs by streamlining processes, reducing errors, and improving efficiency across industries.
  • 24/7 Availability and Scalability
    Unlike human employees, agents can work around the clock, ensuring uninterrupted service and efficient operation.

Challenges for AI Agents

While AI agents provide many benefits, they also face challenges that affect their effectiveness, deployment, and long-term success. These challenges arise from technical, ethical, and operational limitations, which businesses must address to maximize AI's potential.

1. Data Quality and Availability

AI agents rely on vast amounts of high-quality data to act accurately. Incomplete, biased, or outdated data can lead to incorrect decisions and unreliable results.

2. Ethical and Bias Concerns

AI agents can inherit biases present in their training data, which can lead to unfair or skewed decisions. Addressing bias and ensuring ethical AI use is a major challenge for businesses and developers.

3. Security and Privacy Risks

AI agents often process sensitive user and business data, making them an attractive target for cyberattacks. Ensuring data privacy and securing AI models against adversarial attacks is critical for safe deployment.

4. Interpretability and Transparency

Many AI models, especially deep learning-based agents, function as “black boxes”, making it difficult to understand how they derive a particular conclusion. This lack of transparency can reduce confidence and obstruct regulatory compliance.

5. Dependence on Human Oversight

Despite their autonomy, AI agents still require human monitoring and intervention to handle unexpected scenarios, errors, or ethical dilemmas. Finding the right balance between automation and human oversight remains a challenge.

Conclusion

AI agents are changing industries, but their growing influence comes with risks. While businesses take advantage of AI for automation and decision-making, challenges such as data security and ethical concerns must be addressed. The dark web has already seen AI-powered cyber threats, from deepfake scams to automated hacking, proving that these technologies can be exploited.

The focus should not be on how to make AI stronger but on how to ensure it is deployed responsibly. Security, fairness, and transparency must guide AI development so that it is not misused. Organizations that strike the right balance between risk avoidance and innovation will define the future of AI in a responsible way. The true question isn't whether AI agents will reshape industries, but whether we are ready to steer this change in the right direction, maximizing benefits while mitigating threats.

The post The Rise of AI Agents: From Simple Automation to Intelligent Decision-Makers appeared first on Tech Research Online.

DeepSeek AI Adoption in China Soars as Retail Investors Embrace Quant Trading https://techresearchonline.com/news/deepseek-ai-adoption-in-china/ Tue, 11 Mar 2025 16:17:24 +0000 https://techresearchonline.com/?post_type=news&p=13818

The post DeepSeek AI Adoption in China Soars as Retail Investors Embrace Quant Trading appeared first on Tech Research Online.

DeepSeek AI adoption in China has surged as Chinese investors embrace AI tools. According to Reuters, retail traders are flooding training rooms as they seek to leverage DeepSeek and other computer models to beat the market. Online AI crash courses have become common in the Asian country after DeepSeek changed the perception of the $700 billion Chinese hedge fund industry.

A Radical Shift

Today, individual Chinese investors are learning to trade with AI. This is a stark contrast to the public outcry that rocked the country last year, when investors protested against AI-driven quant trading. Retail investors viewed computer-driven quant funds negatively and even blamed regulators for contributing to market volatility and unfairness.

The Chinese government unleashed crackdowns on the $260 billion industry last year. Things have changed this year, with investors spending about $2,179.91 each to learn how to trade stocks with AI in a weekend lecture delivered by Alpha Squared Capital founder Mao Yuchun.

But rapid DeepSeek AI adoption in China's stock market is also changing the way wealth managers and brokerage firms operate. These changes present new risks for retail investors using DeepSeek AI in a market driven and dominated by cash flow from small-time traders.

The DeepSeek Advantage

DeepSeek has gained popularity among Chinese investors due to its strong reasoning, availability, and cost-effectiveness. The Chinese government has also been promoting the AI model.

“In the future, Chinese investors will completely change the way they make investment decisions and place orders. Previously, clients would ask wealth managers for investment advice. Now they ask DeepSeek,” Xiangcai Securities President Zhou Lefeng said.

But even with these advantages, analysts are concerned by how much investors trust the AI model. They caution that as an AI model, DeepSeek has some limitations.

“People trust AI models more than they trust financial advisers, which is probably misplaced trust at least at this stage. Large language models seem impressive. But at this stage, they are not necessarily smarter than most investors,” FinAI Research Analyst Larry Cao said.

Analysts have also warned of herding effects. This could happen if a single school trains a large number of investors to use the same signal to trade. Despite these risks, DeepSeek has caused a significant change in retail investors' perception of quant fund managers.

“I can feel strongly that the public are thinking twice about quant fund managers’ contributions to society. I never think we caused retail investors’ losses. We actually provide liquidity and make the market more efficient,” Baiont Quant CEO Feng Ji said.

Baiont Quant is one of the Chinese quant fund managers that leverage machine learning to trade.

Tapping Social Media

Chinese social media platforms have also been flooded with online courses designed to help traders learn how to use DeepSeek to pick stocks, evaluate companies, and develop trading codes.

“Using quantitative tools to pick stocks saves a lot of time. You can also use DeepSeek to write codes,” Hangzhou-based trader Wen Hao said. Hao uses computer programs to identify the best time to buy and sell stock.

American hedge funds like Renaissance Technologies, BlackRock, and Two Sigma have been leveraging AI in investing. According to analysts, Chinese retail investors and smaller asset managers can benefit immensely from DeepSeek's open-source AI models.

DeepSeek adoption in quant trading coincides with the positive start for stocks following several years of poor performance. According to Goldman Sachs, China’s MSCI Index registered its best start in history at the beginning of this year even as stock brokers focus on setting up AI models on their trading platforms.

XPeng Plans a $13.8 Billion Humanoid Robot Investment as the Robotic Race Shapes Up https://techresearchonline.com/news/xpeng-humanoid-robot-investment/ Tue, 11 Mar 2025 16:09:44 +0000 https://techresearchonline.com/?post_type=news&p=13815

The post XPeng Plans a $13.8 Billion Humanoid Robot Investment as the Robotic Race Shapes Up appeared first on Tech Research Online.

Chinese EV manufacturer XPeng is considering making a significant investment in humanoid robots as it views its project as a long-term venture. XPeng’s humanoid robot investment could be as high as $13.8 billion, Reuters reported.

Industry Potential

XPeng CEO He Xiaopeng reportedly revealed the investment plan during a parliamentary session. On March 10, Chinese state media reported that while Xiaopeng acknowledged his company's current investment remains conservative, he said XPeng is ready to beef it up substantially.

“XPeng has been working in the humanoid robot industry for five years, may continue to be in the business for another 20 years, invest additional 50 billion yuan and even 100 billion yuan,” Xiaopeng said.

The CEO did not disclose XPeng's current investment in robots. The EV manufacturer entered the humanoid robot market back in 2020. The company launched its humanoid robot, Iron, to rival Tesla's Optimus bot in November 2024.

Earlier this month, XPeng reported growth in its EV deliveries for the month of February 2025 even as some Chinese EV companies struggled to sell. The company shipped over 30,000 cars for the fourth straight month as its mass-market brand gave it an edge in the now highly competitive market.

XPeng is one of several auto manufacturers that have ventured into humanoids. Chinese lawmakers have identified robotics as one of the industries in which the country wants to achieve tech breakthroughs.

Growing Interest

Interest in humanoid robots has been growing among tech and automotive companies. XPeng’s participation in the humanoid robot market marks a significant step towards robotic innovation and development of emerging technologies at a lower cost. Ideally, having more players in the industry will most likely lead to a more efficient market and competitive product pricing.

But XPeng will have to invest heavily in research, development, and real-time testing of its robots in consumer and industrial environments. Another Chinese EV manufacturer that is eyeing robotics is Leapmotor, which has already set up a robotics team. Zhu Jiangming, CEO of the Stellantis NV-backed company, said it is currently at the pre-research stage.

The robots are intended for industrial settings, replacing humans in areas such as factory assembly lines, where they could help improve work efficiency. According to Jiangming, auto manufacturers could invest between 1 and 2 billion yuan each year in developing applicable scenarios for humanoid robot deployment.

Robotics Race

As the global robotics race shapes up, the future of humanoid robots remains bright. EV manufacturers across the board are already planning for widespread robot deployment in their manufacturing plants. Early this year, Tesla CEO Elon Musk said his company is planning to make several thousand Tesla Optimus humanoid robots for deployment in its factories. Last year, Musk alluded to these plans on social media.

“Tesla will have genuinely useful humanoid robots in low production for Tesla internal use next year and, hopefully, high production for other companies in 2026,” Musk posted on X in July 2024.

According to Musk, the internal deployment will inform the next version of Optimus, which will be launched in 2026. Musk also said the new version will most likely be sold to Tesla's rivals. This move points to emerging competition in the humanoid robot market.

American big techs Apple and Meta are also planning to enter the humanoid robot market. Reports indicate that Apple is looking into non-humanoid and humanoid robots to support the smart home ecosystem.

The iPhone maker will be focusing more on sensing technologies and user interaction, which indicates an emerging trend where big techs will integrate robotics into tech products. Meta has shown interest in AI-powered humanoid robots. The tech giant plans to invest heavily in this field.

Foxconn Unveils Its First Large Language Model to Drive AI Innovation https://techresearchonline.com/news/foxconn-large-language-model/ Mon, 10 Mar 2025 15:52:08 +0000 https://techresearchonline.com/?post_type=news&p=13776

The post Foxconn Unveils Its First Large Language Model to Drive AI Innovation appeared first on Tech Research Online.

Foxconn, the world’s biggest electronics maker, unveiled its first large language model on Monday, 10th March, 2025. According to Reuters, the company said that it has launched the model to improve manufacturing and supply chain management. Foxconn said in a statement, “The model named ‘Fox Brain’ was trained using 120 of Nvidia’s H100 GPUs and completed in about four weeks.”

The announcement reflects the company's deeper move into AI innovation, especially in industrial automation and smart manufacturing. Foxconn, also known as Hon Hai, assembles Apple's iPhones and makes Nvidia's AI servers. In January 2025, Hon Hai saw a surge in its stock price due to the strong growth of AI in recent times.

One of the most impressive capabilities of the Foxconn large language model is its strong support for traditional Chinese and Taiwanese language use in AI. The company said the new model is based on Meta's Llama 3.1 architecture. Foxconn initially designed FoxBrain for internal applications covering data analysis, decision support, document collaboration, mathematics, reasoning and problem-solving, and code generation.

Foxconn's Collaboration with AI and Tech Giants

The success of Foxconn's large language model is not only the result of internal innovation, but also a product of strategic collaborations with major AI research institutes. The company has reportedly partnered with AI firms and cloud computing providers. In October 2024, Foxconn collaborated with Nvidia to open the world's largest superchip factory in Mexico for the assembly of GB200 chips.

This collaboration allows Foxconn to refine its language model, ensuring that it meets the highest industry standards in accuracy. By working with AI experts, the company can accelerate its AI initiative and integrate its language model in smart manufacturing processes, enterprise solutions and even consumer applications.

The Future of Foxconn’s AI Endeavors

The introduction of Foxconn's large language model marks the beginning of an exciting era in the company's AI ambitions. As the tech giant continues to expand its AI capabilities, we can expect further progress in machine learning, natural language processing, and automation technologies.

Hon Hai's focus on traditional Chinese and Taiwanese language support in AI gives it a competitive edge in regional markets. Foxconn's AI initiative ensures that AI remains a core component of its commercial strategy. Through collaboration with global AI leaders, the company is well positioned to drive innovation and shape the future of manufacturing and artificial intelligence.

DeepSeek vs ChatGPT: Which AI Model Delivers the Best Performance? https://techresearchonline.com/blog/deepseek-vs-chatgpt-comparison/ Fri, 07 Mar 2025 15:54:09 +0000 https://techresearchonline.com/?post_type=blog&p=13731

The post DeepSeek vs ChatGPT: Which AI Model Delivers the Best Performance? appeared first on Tech Research Online.

Introduction

What if we tell you that the AI you choose could redefine your productivity and decision-making? In today’s dynamic AI landscape, choosing the right tool isn’t just about features—it’s about finding the perfect fit for your business goals.

DeepSeek and ChatGPT are two advanced AI models built on cutting-edge natural language processing (NLP) and deep learning. DeepSeek uses retrieval-augmented generation (RAG) to provide real-time information and proves highly competent in research and technical use. It is helpful in fields such as finance and banking, law, and data science, where domain expertise and accuracy matter most. ChatGPT, powered by OpenAI's transformer-based large language models (LLMs), specializes in natural interactions and creative output. These capabilities make it an effective tool for marketing, customer support, and general problem-solving.

The DeepSeek vs ChatGPT decision affects productivity, efficiency, and decision-making across many departments. In this blog post, we dive deep into their functionalities so that you can decide which AI tool best fits your workflow and company goals.

DeepSeek vs ChatGPT: Key Differences You Need to Know

AI-powered language models are transforming how we interact with technology, but choosing the right one depends on your specific needs. DeepSeek and ChatGPT stand out as two advanced AI models, each with unique architectures and capabilities. Let’s break down their key differences.

Model Architecture

  • DeepSeek: DeepSeek's RAG framework allows it to retrieve external data, which keeps its facts accurate and immediately relevant. The tool performs best in fields that need precise information, such as research, data analytics, and academic study.
  • ChatGPT: ChatGPT uses transformer technology to produce outputs by drawing on pre-trained datasets. It demonstrates strong natural language understanding and deep conversational ability, letting it handle general tasks spanning creative writing to customer service interactions.
  • Key Difference: DeepSeek delivers improved responses through continuous access to real-time data, while ChatGPT generates interactive responses from its extensive pre-trained knowledge. DeepSeek provides accurate information from up-to-date sources; ChatGPT delivers better interactive dialogue.
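
To make the retrieval-augmented pattern concrete, here is a deliberately tiny sketch: a keyword-overlap ranker standing in for a real search index, and a prompt builder standing in for the generation step. The documents and function names are invented for illustration; nothing here reflects DeepSeek's actual, proprietary pipeline:

```python
# Toy document store standing in for an external knowledge source.
DOCS = [
    "DeepSeek uses retrieval-augmented generation for current facts.",
    "ChatGPT is a transformer-based large language model.",
    "RAG pipelines retrieve documents, then condition generation on them.",
]

def retrieve(query, docs, k=1):
    """Rank documents by naive keyword overlap with the query (a stand-in retriever)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def answer(query, docs):
    """RAG in miniature: fetch context, then build a grounded prompt.

    A real system would send this prompt to an LLM; here we just return it.
    """
    context = " ".join(retrieve(query, docs))
    return f"Context: {context}\nQuestion: {query}"

print(answer("what does a RAG pipeline retrieve", DOCS))
```

The key idea is the split: retrieval can consult fresh sources at query time, while a purely pre-trained model like ChatGPT answers from whatever its training data contained.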

Performance Strengths

  • DeepSeek: DeepSeek builds on a research foundation to generate accurate responses, making it appropriate for technical disciplines such as law, medicine, and finance. The platform delivers accurate answers by linking with established data repositories.
  • ChatGPT: Excels in natural conversation, creative writing, and adaptive dialogue generation. The tool finds extensive use in customer service, creative brainstorming, storytelling, and content development, where dialogue needs to stay both flowing and logical.
  • Key Difference: DeepSeek serves professionals through precise technical detail and a focus on factual accuracy. ChatGPT performs better in flexible conversation and creative output, making it suitable for marketing, customer service, and content generation.

Accessibility and Cost

  • DeepSeek: DeepSeek primarily serves enterprise-level users and researchers, and its availability depends on industry-specific partnerships and integrations with enterprise systems. Pricing varies significantly based on whether a customized enterprise installation is needed.
  • ChatGPT: ChatGPT offers three pricing tiers: a free tier, ChatGPT Plus, and enterprise-level service. Users at all levels, from private consumers to corporate teams, find it affordable. Through its web app and API, ChatGPT is easy for developers to integrate into various business processes.
  • Key Difference: ChatGPT's range of pricing plans makes it accessible to both casual and professional audiences. DeepSeek's specialized enterprise focus scales well for industry requirements but restricts access for individual users.

Customization and Ease of Use

  • DeepSeek: The DeepSeek platform enables organizations to tailor AI models to specific domains, making it best suited for businesses that need specialized, domain-specific models. Businesses can plug in proprietary knowledge bases and data, which makes it suitable for healthcare, research, and technical support.
  • ChatGPT: ChatGPT's user-friendly interface, accessible API, and flexible customization options make it suitable for deployment in many situations. Companies can adjust its behavior using built-in customization tools without needing specialized technical skills.
  • Key Difference: DeepSeek requires technical expertise to customize but provides higher accuracy in complex specialty areas. ChatGPT offers broad accessibility and easy integration, functioning as a go-to solution for general applications.

Deepseek vs ChatGPT: Differences between the technical features

Criteria | DeepSeek | ChatGPT
Model Type | Retrieval-Augmented Generation (RAG) with external data retrieval | Transformer-based Large Language Model (LLM) with extensive pre-trained data
Pricing | Primarily enterprise-level; pricing varies based on custom integration | Free tier available; ChatGPT Plus ($20/month); enterprise pricing for businesses
Hosting | Mostly cloud-based; requires enterprise integration | Cloud-hosted with web, mobile, and API access
Coding Capabilities | Strong at retrieving relevant code snippets and technical documentation | Excels in code generation, debugging, and explanations
Mathematical Reasoning | Advanced, as it fetches accurate real-time mathematical data | Good, but relies on pre-trained knowledge and sometimes lacks real-time accuracy
Conversational Style | Formal, structured, research-focused | Engaging, natural, adaptive to user tone
Writing & Creativity | Best suited for technical writing and factual accuracy | Excels in storytelling, marketing content, and creative writing
Context Retention | Retains information effectively for long research-based interactions | Strong context retention within conversations, but limited memory across sessions
Research & Data Analysis | Superior in research-heavy fields due to real-time retrieval of facts | Provides insightful summaries and analysis but lacks real-time data access
Language Support | Supports multiple languages, optimized for technical documentation | Multilingual with strong natural language understanding in various languages
Privacy & Security | Focuses on enterprise-grade data security and compliance | OpenAI provides standard encryption and privacy measures, with enterprise security options
API Access | Available, but mainly for enterprise and technical users | Robust API access for developers, with scalable pricing plans
Enterprise Solution | Designed for industry-specific AI applications and deep research | Offers enterprise plans with customization and priority support
Ease of Use | Requires technical expertise for optimal customization | User-friendly interface, accessible for general users without technical expertise

ChatGPT vs DeepSeek: Use Cases Across Industries

AI-driven language models like ChatGPT and DeepSeek are revolutionizing how businesses and individuals approach tasks in content creation, coding, research, and data analysis. While both models leverage advanced natural language processing (NLP) and machine learning, their applications vary based on their unique architectures. DeepSeek's retrieval-augmented generation (RAG) framework enhances its effectiveness in research-heavy tasks, whereas ChatGPT's transformer-based model excels in conversational AI, creative content, and coding assistance. Let's explore how each AI model performs across different use cases.

Use Cases for DeepSeek

  • Content Writing: DeepSeek excels at generating research-based content for articles, technical manuals, and academic papers. The system retrieves current information and validates it to maintain accuracy.
  • Coding and IT: DeepSeek's data-focused architecture suits coding work: it helps automate bug detection, pulls relevant code excerpts from documentation, and adds real-world samples to technical answers.
  • Research and Learning: DeepSeek combines academic and enterprise research capabilities, linking to scientific databases, financial reports, and legal resources for up-to-date findings.
  • Data/Financial Analysis: DeepSeek's data-processing capabilities work best with structured information for financial forecasting, stock market analytics, and business data analysis, where accuracy and real-time updates are the priority.

Use Cases for ChatGPT

  • Content Writing: ChatGPT's text generation supports many content-writing tasks, including blogging, marketing content creation, social media posting, ad copy development, and storytelling. It produces human-like text that works well for compelling, persuasive content.
  • Coding and IT: Developers use ChatGPT to produce and optimize code, explain technical concepts, and solve common coding problems. It is highly effective for learning new programming languages and working through programming exercises.
  • Research and Learning: As an interactive learning tool, ChatGPT helps students comprehend advanced subjects, summarizes academic texts, and walks through difficult problems step by step. It is widely used for exam preparation, online learning, and training courses.
  • Data/Financial Analysis: ChatGPT can summarize financial reports, explain analytical concepts, and help draft analyses, but it relies on pre-trained knowledge and lacks real-time data access, so outputs should be verified against current sources.

Both models serve distinct yet overlapping purposes, catering to different industries and user needs. Whether you prioritize fact-based retrieval (DeepSeek) or dynamic conversation and creativity (ChatGPT), understanding their strengths will help you choose the right AI for your workflow.

The Right AI for the Right Job

Artificial intelligence is shaping every aspect of content creation, programming, research, and analysis. Your decision between ChatGPT and DeepSeek should depend on your requirements rather than on finding a universally superior model. DeepSeek stands out for users who prioritize immediate, accurate results. ChatGPT's ability to adapt across conversations makes it an excellent choice for those who need flexible support throughout the workday.

AI's potential is realized when professionals apply the right technology at the right moment. Businesses and professionals that successfully integrate DeepSeek and ChatGPT gain strong advantages in productivity, efficiency, and innovation. Future AI systems will likely combine DeepSeek's precision with ChatGPT's conversational strengths. A clear understanding of the differences between DeepSeek and ChatGPT gives you better efficiency and lets you reconsider AI's role in the workplace of the future.

FAQ: DeepSeek vs ChatGPT

1. What is the main difference between DeepSeek and ChatGPT?

DeepSeek retrieves real-time data for accuracy, while ChatGPT generates responses based on pre-trained knowledge, making it better for conversations and creativity.

2. Which AI provides more accurate information?

DeepSeek, as it fetches real-time data, whereas ChatGPT relies on pre-existing knowledge.

3. Which is better for business use?

DeepSeek suits research-heavy industries; ChatGPT is ideal for customer service, marketing, and general business tasks.

4. How do their pricing models compare?

DeepSeek is mainly enterprise-based; ChatGPT has a free tier and paid plans starting at $20/month.

KraneShares Gets Direct AI Stake as Anthropic’s Valuation Hits $61.5 Billion https://techresearchonline.com/news/kraneshares-anthropic-ai-investment/ Tue, 04 Mar 2025 17:11:21 +0000 https://techresearchonline.com/?post_type=news&p=13618

The post KraneShares Gets Direct AI Stake as Anthropic’s Valuation Hits $61.5 Billion appeared first on Tech Research Online.

]]>
Anthropic has closed a $3.5 billion funding round at a $61.5 billion valuation, Yahoo Finance has reported. The high Anthropic valuation includes the new funding. It comes from the fast growth that the AI company has recorded. In 2024, Anthropic reported $1 billion in annual revenue run rate. This number has risen by about 30% this year.

KraneShares AI Strategy

The new Anthropic funding round saw exchange traded funds owned by KraneShares get a stake in the OpenAI rival. The expansion of the KraneShares AI and Technology ETF (AGIX) to include a direct stake in Anthropic opens a new avenue for individual investors seeking exposure to high-value private companies.

AGIX becomes the second US-listed ETF to hold direct private equity securities. The first was the Alger AI Enablers & Adopters ETF, which holds a direct stake in SB Technology. At the moment, less than 5% of the fund’s assets have been placed in Anthropic.

AGIX cannot raise its private-asset allocation beyond 15% due to rules that restrict how much open-ended funds can invest in illiquid assets. Most funds make unlisted investments indirectly through special purpose vehicles or replication strategies. This approach is informed by the large liquidity gap between private assets, whose shares rarely change hands, and publicly traded ETFs.

KraneShares addressed the liquidity challenge between AGIX shares and Anthropic stake by forming an internal fair-market-value committee that will determine asset value daily. Bloomberg Intelligence Analyst David Cohne says direct investments have the potential to reduce costs and offer more transparency.

“We’ll continue to see more ETFs investing in private assets, whether that’s through a direct investment or a special purpose vehicle — there’s certainly an appetite for it,” Cohne said.

Anthropic Expansion Plans

The recent Anthropic investment is expected to spur competition with OpenAI. The latter is already talking to investors to increase its valuation to $300 billion. These deals reflect the enthusiasm among tech investors to channel huge sums of money into top AI firms despite the recent emergence of smaller firms like DeepSeek.

Anthropic will use the new capital to create the next generation of its AI models. The company will also use the funding to advance research and expand its computing capacity, and it plans to fast-track its expansion into Europe and Asia.

“This investment fuels our development of more intelligent and capable AI systems that expand what humans can achieve, while deepening our understanding of how these systems work. These capabilities are driving remarkable outcomes for our customers as our business and consumer usage continue to grow rapidly,” Anthropic CFO Krishna Rao said in a statement.

The recent funding comes months after two big tech companies that had previously invested in Anthropic made additional investments. Google announced a $1 billion investment in the company in January this year, while Amazon.com made a $4 billion investment in November 2024. Anthropic powers the AI-enabled Alexa assistant that the e-commerce giant launched recently.

AI Industry Outlook

Anthropic unveiled an advanced version of its AI model, Sonnet. It also launched a new AI agent capable of automating software coding tasks. Anthropic was founded by former OpenAI employees in 2021.

Since then, the AI company has positioned itself as a safety-conscious and reliable brand that users can trust. Anthropic had planned to raise $2 billion in the recent financing round. It surpassed its target after the round was oversubscribed.

The AI company faces stiff competition from big players in the AI industry like OpenAI, Meta, and Google. As the competition heats up, its ability to differentiate itself through enterprise adoption, safety, and reasoning capabilities will shape its success in the long run.

Lightspeed Venture Partners led the latest funding round. Other tech investors that participated in the Series E funding round were Bessemer Venture Partners, Menlo Ventures, and General Catalyst.

The post KraneShares Gets Direct AI Stake as Anthropic’s Valuation Hits $61.5 Billion appeared first on Tech Research Online.

MIPS Makes Strategic Shift in Robotics, Focuses on Chips that Support Sensing and Control https://techresearchonline.com/news/mips-robotics-ai-chip-innovation/ Tue, 04 Mar 2025 17:07:24 +0000 https://techresearchonline.com/?post_type=news&p=13617
MIPS has announced a shift into robotics, Reuters reported. The tech company, which has existed for several decades, once competed with Arm Holdings in providing computer architectures. MIPS’ strategic shift will see the company develop the MIPS Atlas portfolio, a suite of chips for AI-enabled robots.

MIPS Chip Advancements

MIPS is a global leader in supplying compute subsystems for autonomous vehicle platforms. The company has been manufacturing chips with high processing speeds. For a long time, MIPS chips have been considered ideal for specialized applications like self-driving vehicles and networking gear.

On March 4, MIPS announced a shift in its strategy, saying it will now design its own chips while continuing to license its technology. The shift will see MIPS focus on three key aspects of robotics: chips that support sensing, chips that control robot actuators and motors, and chips that calculate the actions a robot should take next.

Through its CEO Sameer Wasson, the company said it expects these markets to grow as new AI applications like humanoid robots emerge. MIPS holds that to attract business, it is better to showcase a working chip than a presentation, even when pursuing a licensing deal. Initially, MIPS will focus on the automotive industry.

“It doesn’t mean MIPS is going to overnight turn into a silicon company. I don’t see that. But I think we’ve got to give the ecosystem confidence that this can be done. I expect this technology to be in a car towards the end of ’27 and start to hit volume in the ’28 timeframe,” Wasson said.

Physical AI at the Edge

The company also unveiled the MIPS Atlas portfolio, a new product that will allow industrial, automotive, and embedded tech companies to roll out efficient, safe, and secure physical AI at the edge.

By merging high-performance real-time computing with edge deployment of generative AI models and functional safety, the MIPS chip product suite will facilitate the development of next-generation autonomous platforms.

“The need for efficient autonomous platforms to advance next-generation driverless vehicles, factory automation, and many other applications is directly aligned with the MIPS Atlas portfolio. Our core competencies of safety, efficient data processing and experience in autonomy have enabled us to expand our portfolio with real-time intelligence that is the essential tech stack for Physical AI platforms. MIPS customers can take our compute subsystems with software stacks as a turnkey solution to build physical AI platforms,” Wasson said.

MIPS has designed its Atlas portfolio around three computing categories that constitute physical AI. These are Sense, Think, and Act. Robots use a wide range of sensors to interpret their surroundings. They generate data that is transferred, integrated, and processed quickly and in real time. The AI engine on the Physical AI platforms processes the data to facilitate quick decision-making for precise action.

“By integrating safety, efficiency and cutting-edge intelligence, MIPS is well-positioned to accelerate innovation across the rapidly expanding $1 trillion Physical AI market,” HyperFRAME Research CEO Steven Dickens said.

A Long History

MIPS has a long history in computing innovation and safety processing. The company was co-founded by Stanford University Professor John Hennessy in the mid-1980s to commercialize a computer architecture built on more agile ways of performing computing tasks.

Since then, the tech company has passed through different owners, licensed some of its technologies in China, and went bankrupt. In 2021, the company emerged from bankruptcy and said it would adopt the open computing architecture RISC-V.

The computing company now serves autonomous driving companies like Mobileye. MIPS has sold intellectual property rights to other companies that have used them to develop full-fledged chips.

The post MIPS Makes Strategic Shift in Robotics, Focuses on Chips that Support Sensing and Control appeared first on Tech Research Online.
