DeepSeek: The Rising Star In Open-Source AI Goes Down After Malicious Attack

DeepSeek, a rising star in advanced AI solutions, experienced a series of large-scale malicious attacks on its online services Monday, following media buzz about the AI platform.

“DeepSeek’s online services have recently faced large-scale malicious attacks. To ensure continued service, registration is temporarily limited to +86 phone numbers. Existing users can log in as usual. Thanks for your understanding and support,” the company said on its website.

A recent breakthrough by the Chinese AI startup sent ripples of concern through the US tech industry, triggering a sharp drop in stock prices Monday morning.

The company’s unveiling of R1, a ChatGPT-like AI model built at a fraction of the cost of its American counterparts, has challenged the perceived dominance of US tech giants in the AI arena.

DeepSeek claims to have trained its latest AI model for a mere $5.6 million, a stark contrast to the hundreds of millions or even billions of dollars invested by US companies like OpenAI, Google, and Meta.

This remarkably low cost, first reported by the Wall Street Journal, has sparked significant market volatility, particularly within the tech sector.

The S&P 500 fell by 1.4%, and the tech-heavy Nasdaq plunged by 2.3%. Nvidia, a leading supplier of AI chips and a major beneficiary of the AI boom, saw its stock plummet by 12%.

Meta and Alphabet also experienced sharp declines, along with other chip manufacturers and data center companies. This downturn subsequently impacted the broader stock market, given tech’s substantial weight within indices like the S&P 500.  

Developments and Features of DeepSeek:

  • DeepSeek-V3: This Mixture-of-Experts (MoE) model boasts 671 billion total parameters, with 37 billion activated for each token (a minimal routing sketch follows this list). It utilizes Multi-head Latent Attention (MLA) and DeepSeekMoE architectures for efficient inference and cost-effective training. Notably, DeepSeek-V3 employs an auxiliary-loss-free strategy for load balancing and a multi-token prediction training objective for enhanced performance. Pre-trained on a massive 14.8 trillion tokens, followed by Supervised Fine-Tuning and Reinforcement Learning, DeepSeek-V3 rivals leading closed-source models in performance while maintaining remarkable training stability.
  • DeepSeek Coder: This series of code LLMs is trained from scratch on a mix of code (87%) and natural language (13%) in both English and Chinese. With models ranging from 1 billion to 33 billion parameters, DeepSeek Coder demonstrates strong capabilities in code generation and understanding.  
  • Open-Source Philosophy: DeepSeek’s dedication to open-source development fosters collaboration and accelerates innovation within the AI community. By making its models and research publicly available, DeepSeek contributes to the democratization of advanced AI technology.  
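
For readers curious how a model can hold 671 billion parameters yet activate only 37 billion per token, the sketch below illustrates the general Mixture-of-Experts idea: a router scores many expert sub-networks, but only the top-k experts actually run for each token. All sizes, names, and the routing scheme here are toy assumptions for illustration, not DeepSeek's actual code or architecture.

```python
# Minimal, illustrative Mixture-of-Experts routing sketch (hypothetical toy
# sizes; NOT DeepSeek's implementation). Only TOP_K of N_EXPERTS experts run
# per token, so only a fraction of the total parameters is active at once.

import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 64      # hidden size per token (toy value)
N_EXPERTS = 16    # total experts (real MoE models use far more)
TOP_K = 2         # experts activated per token

# Each expert is a small feed-forward block; together they hold most parameters.
experts_w1 = rng.standard_normal((N_EXPERTS, D_MODEL, 4 * D_MODEL)) * 0.02
experts_w2 = rng.standard_normal((N_EXPERTS, 4 * D_MODEL, D_MODEL)) * 0.02
router_w = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02


def moe_layer(tokens: np.ndarray) -> np.ndarray:
    """Route each token to its TOP_K experts and combine their outputs."""
    scores = tokens @ router_w                        # (n_tokens, N_EXPERTS)
    probs = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)
    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        top = np.argsort(probs[i])[-TOP_K:]           # indices of chosen experts
        gate = probs[i][top] / probs[i][top].sum()    # renormalized gate weights
        for g, e in zip(gate, top):
            hidden = np.maximum(tok @ experts_w1[e], 0.0)   # ReLU feed-forward
            out[i] += g * (hidden @ experts_w2[e])
    return out


tokens = rng.standard_normal((4, D_MODEL))
print(moe_layer(tokens).shape)   # (4, 64)

# Fraction of expert parameters touched per token:
per_expert = experts_w1[0].size + experts_w2[0].size
print(f"active fraction: {TOP_K * per_expert / (N_EXPERTS * per_expert):.1%}")
```

In this toy setup only 2 of 16 experts fire per token, so roughly an eighth of the expert parameters are used for any given token; the same principle, at a much larger scale, is what lets a 671-billion-parameter model run with about 37 billion parameters active per token.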
