China just won the AI wars. Wall Street panics!



  • #56001

    Unseen
    Participant

    Try it out and tell us if you’re impressed.

    #56004

    _Robert_
    Participant

I have been selling stocks and buying income-generating bonds lately. My risk tolerance is inversely proportional to my age, LOL.

Is it just a little ironic that China is creating efficient AI that will eventually destroy employment as they know it?

    #56007

    Unseen
    Participant

In case anyone wants to know what happened, in a nutshell: American tech giants have been working on AI in the hopes of monetizing it and dominating the market, generating monstrous amounts of money. I’m talking about Google (Gemini), OpenAI (ChatGPT, backed by Microsoft and others), plus several other major players and numerous minor ones. You can be sure that Microsoft, Amazon, IBM, Nvidia and others have been hard at work hoping to dominate AI. The problem for those businesses is that much of DeepSeek is open source, free to use, and requires no licensing. Use it as is or, if you can build it better, use it as a starting point and build it better.

Along comes DeepSeek, literally out of left field, from China, and it’s as good as or better than a lot of the AI being developed in the USA by for-profit companies. And a lot cheaper. Here is how my Brave browser’s AI summarized the price difference:

    DeepSeek Cost vs Competitors

    DeepSeek’s cost is significantly lower compared to its competitors. The model’s API pricing is around $0.55 per million input tokens and $2.19 per million output tokens. In contrast, OpenAI’s API costs $15 and $60, respectively, for the same number of tokens. This makes DeepSeek’s models more accessible to smaller businesses and developers who may not have the resources to invest in expensive proprietary solutions.

    DeepSeek: Costs $0.55 per million input tokens and $2.19 per million output tokens

    OpenAI: Costs $15 per million input tokens and $60 per million output tokens

    ChatGPT: Has a higher development cost, with its subscription model services requiring users to pay more

    DeepSeek’s cost-efficient performance is largely due to its use of reinforcement learning and efficient architectures like Mixture-of-Experts (MoE). This approach allows the model to achieve high performance with significantly less financial investment. The company’s ability to train its base model, V3, on a $5.58 million budget over two months is a testament to its cost-effectiveness.

    Overall, DeepSeek’s competitive pricing and open-source nature make it an attractive option for those looking for affordable and efficient AI solutions.
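To put those per-token prices in concrete terms, here’s a rough back-of-the-envelope sketch in Python. It only uses the figures quoted above; actual pricing varies by model and changes frequently, so treat the numbers as illustrative:

```python
# Rough cost comparison using the per-million-token prices quoted above.
# These prices are illustrative only; real pricing varies by model and over time.
PRICES = {  # (input $/1M tokens, output $/1M tokens)
    "DeepSeek": (0.55, 2.19),
    "OpenAI":   (15.00, 60.00),
}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single API call."""
    in_price, out_price = PRICES[provider]
    return (input_tokens / 1_000_000) * in_price + (output_tokens / 1_000_000) * out_price

# Example: a 2,000-token prompt with a 1,000-token reply.
for provider in PRICES:
    print(f"{provider}: ${request_cost(provider, 2_000, 1_000):.4f}")
# DeepSeek: $0.0033 vs OpenAI: $0.0900 -- roughly a 27x difference at these rates.
```

Either way a single chat turn costs a fraction of a cent, but a 27x gap adds up quickly for anyone making millions of API calls.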

    What are these “tokens” mentioned above? Here’s what Gemini AI told me:

    In the context of AI, particularly in natural language processing (NLP) models, an input token is a fundamental unit of information that an AI model receives as input.  

     
    Here’s a breakdown:

    What they are:

    Tokens can be words, parts of words (subwords), characters, or even punctuation marks.    
    They are the smallest units of text that the AI model processes.    

    Why they matter:

    Understanding: Tokenization allows the AI to break down human language into manageable pieces, making it easier to understand and process. 

    Efficiency: By dividing text into tokens, the AI can process information more efficiently.   

    Flexibility: Different tokenization methods can be used depending on the specific needs of the AI model and the nature of the text.    

    Example:

The sentence “The quick brown fox jumps over the lazy dog.” might be tokenized as follows:

Word-level: “The”, “quick”, “brown”, “fox”, “jumps”, “over”, “the”, “lazy”, “dog”, “.”

Subword-level: “The”, “quick”, “brown”, “fox”, “jump”, “s”, “over”, “the”, “lazy”, “dog”, “.” (If “jumps” is not in the model’s vocabulary, it might be broken down into “jump” and “s”.)

    Key takeaway:

    Input tokens are crucial for AI models to understand and process human language effectively. They play a vital role in various NLP tasks, such as:  

    Text generation: Creating human-like text, such as stories, articles, and code.    

    Translation: Translating text between different languages.    

    Sentiment analysis: Determining the emotional tone of a piece of text.

    Question answering: Answering questions based on given text.
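To make the word-level example above concrete, here’s a minimal sketch of naive word-level tokenization in Python. Real LLMs use learned subword tokenizers (e.g. BPE), so this only approximates the first step:

```python
import re

sentence = "The quick brown fox jumps over the lazy dog."

# Naive word-level tokenization: keep runs of word characters as tokens and
# treat punctuation as its own token. A learned subword tokenizer might instead
# split "jumps" into "jump" + "s" if "jumps" isn't in its vocabulary.
tokens = re.findall(r"\w+|[^\w\s]", sentence)
print(tokens)
# ['The', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dog', '.']
```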

    #56008

Threatening Colombia with tariffs was not a very good idea.

    #56009

    _Robert_
    Participant

    Let’s say I picked up $100K ea. worth of these stocks today at these prices. We can check back tomorrow and in a month. Buy the dip.

    NVDA 117
    AVGO 199
    AMD 114
    POWL 235

    #56010

    _Robert_
    Participant

Threatening Colombia with tariffs was not a very good idea.

    There’s no safe place away from that maniac!

    #56011

    _Robert_
    Participant

Shocking news: Farmers unaware how sowing and reaping works.

Shocking news: Farmers discover what happens when they let foxes guard the henhouse.

Shocking news: Farmers can’t recognize wolves in sheep’s clothing.

    #56012

    _Robert_
    Participant

    Let’s say I picked up $100K ea. worth of these stocks today at these prices. We can check back tomorrow and in a month. Buy the dip.

NVDA 117 -> 119.60 at 10 am
AVGO 199 -> 203.41
AMD 114 -> 113.68
POWL 235 -> 242.64

So I would have been up about $7,408.66 buying this dip for a few hours. AMD was the only loser. Nothing astronomical, so the market is still concerned about the Chinese AI.

     

    #56013

    _Robert_
    Participant

    Let’s say I picked up $100K ea. worth of these stocks today at these prices. We can check back tomorrow and in a month. Buy the dip.

NVDA 117 -> 119.60 at 10 am
AVGO 199 -> 203.41
AMD 114 -> 113.68
POWL 235 -> 242.64

So I would have been up about $7,408.66 buying this dip for a few hours. AMD was the only loser. Nothing astronomical, so the market is still concerned about the Chinese AI.

    Prices at Close today

    NVDA 127.66
    AVGO 207.32
    AMD 114.17
    POWL  238.27

    24-hour net profit for my buy-the-dip idea = $14,832.63

Sorry to report that I didn’t actually do it.
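For anyone who wants to check the arithmetic, here’s a rough sketch of that calculation, assuming $100K put into each ticker at the dip prices, fractional shares, and no fees or taxes:

```python
# Hypothetical $100K positions bought at the dip prices, marked to the closing
# prices quoted above. Fractional shares assumed; commissions and taxes ignored.
buy_prices   = {"NVDA": 117.00, "AVGO": 199.00, "AMD": 114.00, "POWL": 235.00}
close_prices = {"NVDA": 127.66, "AVGO": 207.32, "AMD": 114.17, "POWL": 238.27}
stake = 100_000  # dollars per ticker

total_gain = 0.0
for ticker, buy in buy_prices.items():
    shares = stake / buy
    gain = shares * (close_prices[ticker] - buy)
    total_gain += gain
    print(f"{ticker}: {shares:8.2f} shares, gain ${gain:10.2f}")

print(f"24-hour net gain: ${total_gain:,.2f}")  # roughly $14,833 at these prices
```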

     

    #56014

Nvidia should go up again. I am getting some Monopoly money out.

    #56016

    PopeBeanie
    Moderator

I’m not so sure the “AI war” is over so soon. Because DeepSeek is open source, i.e. not proprietary code, competitors can learn from its design. The story, beyond the stock market realignment, is about a realignment of how LLMs can be developed and marketed, given the sudden drop in development costs. So in my opinion, the lower cost of development will benefit the entire industry and consumer base in the long run, albeit after the big spenders lose significant credibility for their up-to-now very expensive products. And lower costs will spur more users to buy subscriptions and create more demand.

    NVIDIA takes a big hit, but I’m betting they’ll be able to adapt to the new paradigm quickly. Their designs have been groundbreaking. And there are some important cases where the current LLMs perform better than DeepSeek.

In any case, it’s possible that China has some geniuses who can keep paving new ground faster than we have and will. There’s a lot about it on YouTube now that I haven’t watched, but this one was the most interesting to me. (This is just my opinion; I’m not calling myself an expert on it.)

     

    #56018

    PopeBeanie
    Moderator

    Here’s a new, potential twist.
    Thanks to Reg for telling me about this website. I signed up for their newsletter.
    Please let me know if the link doesn’t work for you.

From the article, “OpenAI Furious DeepSeek Might Have Stolen All the Data OpenAI Stole From Us”:

    The narrative that OpenAI, Microsoft, and freshly minted White House “AI czar” David Sacks are now pushing to explain why DeepSeek was able to create a large language model that outpaces OpenAI’s while spending orders of magnitude less money and using older chips is that DeepSeek used OpenAI’s data unfairly and without compensation. Sound familiar?

    Both Bloomberg and the Financial Times are reporting that Microsoft and OpenAI have been probing whether DeepSeek improperly trained the R1 model that is taking the AI world by storm on the outputs of OpenAI models.

    Here is how the Bloomberg article begins: “Microsoft Corp. and OpenAI are investigating whether data output from OpenAI’s technology was obtained in an unauthorized manner by a group linked to Chinese artificial intelligence startup DeepSeek, according to people familiar with the matter.” The story goes on to say that “Such activity could violate OpenAI’s terms of service or could indicate the group acted to remove OpenAI’s restrictions on how much data they could obtain, the people said.”

Also recall the exclusive group of millionaires and billionaires, in my opinion potential oligarchs, presented to us at Trump’s inauguration. I hope this conspiracy-minded feeling of mine doesn’t last. I can at least reasonably wonder how long ago, and by whom, this narrative was first formulated.

    #56019

    PopeBeanie
    Moderator

This is a good time to mention a kind of “pollution of AI” that’s been said to be possible: when one AI gets all of its training data from other AIs, errors and hallucinations can compound.
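As a toy illustration only (not how LLMs are actually trained), here’s a tiny simulation of that compounding: each “generation” fits a Gaussian to the previous generation’s outputs and then trains only on samples from its own fit, so small estimation errors accumulate once the chain is cut off from the original data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=500)

for gen in range(1, 11):
    # Each new "model" only ever sees the previous model's outputs.
    mu, sigma = data.mean(), data.std()
    data = rng.normal(loc=mu, scale=sigma, size=500)
    print(f"generation {gen:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# With each generation the fitted statistics drift further from the original
# distribution -- small errors compound when no fresh real data enters the loop.
```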

I wish I had the time to get into these stories (and add them to the AI group), but I’m in the middle of a big life change.

    #56020

    Unseen
    Participant

AIs can give bad answers. For example, I asked ChatGPT for a baguette recipe to use up 4 cups of flour, and it gave me a recipe that lacked any salt. I pointed out that bread without salt would be unappetizing, and the AI admitted the error and modified the recipe to include salt.

    So, placing too much trust in an answer that you can’t verify can result in taking an action that will fail or even produce a dangerous result.

    #56021

    Unseen
    Participant

As for one AI stealing from another one… Hey, business is war. As long as they aren’t illegally hacking and are using data the other AI gives out freely, it would be difficult to mount a legal counterattack, it seems to me.

