OK, I did this by hand and noticed my number came in a lot lower than these, at $148,644. Found the difference: the DeepThink and ChatGPT models are missing a BIG deduction, the Qualified Business Income (QBI) deduction. I understand there is the 50% W-2 / 2.5% UBIA phaseout that kicks in at $380,000 taxable income for MFJ. However, in this scenario the QBI deduction would probably be worth about $84,868. With such a large deduction, the AI should have asked clarifying questions; otherwise, at the 35% tax rate, they're leaving about $29,700 on the table. Also worth noting: I used 2024 tax brackets.
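A rough sanity check of that math (illustrative only; the ~$424,340 of implied QBI and the flat 35% marginal rate are back-of-the-envelope assumptions, not figures from the video — a real return needs the full Form 8995-A computation):

```python
# Back-of-the-envelope QBI check. Assumes the W-2/UBIA limitation does
# NOT apply and uses a single flat marginal rate, both simplifications.

def qbi_deduction(qualified_business_income: float) -> float:
    """Tentative QBI deduction: 20% of qualified business income."""
    return 0.20 * qualified_business_income

def tax_savings(deduction: float, marginal_rate: float) -> float:
    """Tax saved by a deduction at a given marginal rate."""
    return deduction * marginal_rate

# A deduction of ~$84,868 implies roughly $424,340 of QBI (84,868 / 0.20).
deduction = qbi_deduction(424_340)       # -> 84,868.0
savings = tax_savings(deduction, 0.35)   # -> ~29,703.8, i.e. "about $29,700"
print(f"deduction={deduction:,.0f}, savings={savings:,.0f}")
```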
@HectorGarciaCPA 18 hours ago
Thanks!... there you go... There are two big questions here: whether the two K-1s and the Schedule C would qualify for QBI.
@t2p5g411 hours ago
Very informative video, thank you. I am in Canada and had a client with a German pension that I needed to translate into English. We scanned it as a first step. I found that Claude worked best because it could use a fixed-width font in the Artifacts window to replicate the columns.
@HectorGarciaCPA 8 hours ago
I like Claude for that stuff
@t2p5g411 hours ago
I wonder if you could upload all the applicable IRS publications to NotebookLM and see how it would do.
@HectorGarciaCPA 11 hours ago
In theory, yes; that's what TaxGPT and CPApilot are for.
@AnuragSingh-zn3nu 20 hours ago
So who's the most accurate according to your calculation?
@HectorGarciaCPA 13 hours ago
They were technically all OFF, as none of them considered QBI, but that can easily be trained.
@AnuragSingh-zn3nu 12 hours ago
@@HectorGarciaCPA thank you.
@pluggedinprofits.4 hours ago
Thank you Hector, this was a fantastic test. Good to know that our clients will be as confused as us till further notice 🤣 Also, I personally ranked DeepSeek and Claude over the GPTs; heck, even Perplexity over GPT.
@fernandoortiz606912 hours ago
what is the correct amount?
@HectorGarciaCPA 11 hours ago
None of them were 100%... but ChatGPT o1 and DeepThink R1 were closest.
@kalusallisudusalli77216 hours ago
You need to click the DeepSeek search button to get the current values.
@HectorGarciaCPA 14 hours ago
I did click on that
@marvinsafi20 hours ago
thanks for sharing
@HectorGarciaCPA 18 hours ago
You're welcome!
@mbohdanowicz77 15 hours ago
For context, these are zero-shot questions with no guidance or input sources. Wrappers are done.
@HectorGarciaCPA 14 hours ago
Wrappers?
@tamanpara26828 hours ago
I think he means the accounting profession is about to be decimated, if not obliterated, by these AI learning models. Ditto design engineering. These high-end services are about to get a lot less costly for the end customer.
@mbohdanowicz77 7 hours ago
@HectorGarciaCPA Fellow beancounter here. A wrapper is a pre-configured app/front-end that uses a frontier model such as OpenAI or Gemini as its "brain": pre-configured directions/instructions on its purpose and output, with structured sources to draw on, such as state and federal tax libraries or case precedents. This improves the quality of the response.
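For anyone curious, that idea can be sketched in a few lines. This is a toy illustration only; `build_prompt`, `answer`, and the `ask_model` callback are made-up names, not any real product's API:

```python
# Minimal sketch of an LLM "wrapper": fixed instructions plus curated
# sources are bundled around the user's question before it ever reaches
# the model. `ask_model` stands in for whatever frontier-model API call
# the wrapper actually uses.

from typing import Callable, List

SYSTEM_INSTRUCTIONS = (
    "You are a tax research assistant. Answer only from the provided "
    "sources and cite the section you relied on."
)

def build_prompt(question: str, sources: List[str]) -> str:
    """Assemble the pre-configured context the wrapper sends to the model."""
    source_block = "\n\n".join(
        f"[Source {i + 1}] {s}" for i, s in enumerate(sources)
    )
    return f"{SYSTEM_INSTRUCTIONS}\n\n{source_block}\n\nQuestion: {question}"

def answer(question: str, sources: List[str],
           ask_model: Callable[[str], str]) -> str:
    """The wrapper's whole job: enrich the prompt, then delegate to the model."""
    return ask_model(build_prompt(question, sources))

# Example with a stub model that just reports how much context it received:
reply = answer(
    "Does a K-1 from an S-corp qualify for QBI?",
    ["IRC §199A: qualified trade or business income...",
     "Form 8995 instructions..."],
    ask_model=lambda prompt: f"(model saw {len(prompt)} chars of context)",
)
print(reply)
```

The value is entirely in the curation: the same underlying model answers better because the wrapper controls what it sees and what it is told to do.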
@HectorGarciaCPA 7 hours ago
@ And you think with DeepSeek those wrappers are toast?
@nuqwestr a day ago
Running locally vs. the subscription model: R1 still needs the large language model.
@HectorGarciaCPA a day ago
I did not run it locally; I used the web-based tools shown in this video.
@nuqwestr a day ago
@@HectorGarciaCPA I'm confused then. It's my understanding that R1 requires LLMs to be trained before it runs; faster, yes, but doesn't it still require the subscription-based data?
@nuqwestr a day ago
R1 runs on your home-based chipset; isn't Groq a chipset specifically designed for LLMs?
@nuqwestr a day ago
To run **DeepSeek R1**, the minimum chipset requirements depend on the model size you're using. Here are the general requirements:
- **For the 7 billion parameter model**: at least an **NVIDIA RTX 3060** with **12GB of VRAM**.
- **For the 33 billion parameter model**: at least an **NVIDIA RTX 4080** with **16GB of VRAM**.
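If you do want to try it locally, here is one way to sanity-check a machine against those figures. The VRAM thresholds below simply restate the comment above (treat them as rough guidance, not official requirements), and `nvidia-smi` is NVIDIA's standard GPU query CLI; the check degrades gracefully when no GPU is present:

```python
# Quick local-hardware check against the VRAM figures quoted above
# (7B -> 12GB, 33B -> 16GB, per the comment; not official numbers).

import subprocess

VRAM_NEEDED_GB = {"7b": 12, "33b": 16}

def detected_vram_gb() -> float:
    """Total VRAM of GPU 0 in GiB via nvidia-smi; 0.0 if unavailable."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.total",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        return float(out.splitlines()[0]) / 1024  # MiB -> GiB
    except (OSError, subprocess.CalledProcessError, ValueError, IndexError):
        return 0.0  # no NVIDIA GPU / driver found

def runnable_models(vram_gb: float) -> list:
    """Which of the quoted model sizes fit in the given VRAM."""
    return [m for m, need in VRAM_NEEDED_GB.items() if vram_gb >= need]

print(runnable_models(detected_vram_gb()))
```

On a machine without an NVIDIA GPU this just prints an empty list rather than raising.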