Watch My Visual Studio Code IDE Setup Video For Fonts, Themes & Extensions
https://youtu.be/Bon8Pm1gbX8
Join the official Discord server to resolve doubts here:
https://discord.gg/cRnjhk6nzW
Visit my Online Free Media Tool Website
https://freemediatools.com/
Buy Premium Scripts and Apps Here:
https://procodestore.com/
Video Transcript
Hello guys, in this live stream I will show you how to use Claude Code for free, forever. Claude Code is a command-line coding tool from Anthropic, the company behind Claude, and they normally charge a subscription for it. But you can run a local AI model inside it through Ollama, and I will show you the full setup. For this tutorial you should have at least Claude Code and Ollama installed.
If you don't have Claude Code installed, simply go to the official site and download it by executing the install command. If you are on Windows, open PowerShell and paste the command there to install Claude Code.
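The exact command is not readable on screen; as a reference, one common documented way to install Claude Code (assuming you already have Node.js and npm) is the global npm package — check Anthropic's docs if the package name has changed:

```shell
# Install Claude Code globally via npm (requires a recent Node.js)
npm install -g @anthropic-ai/claude-code

# Verify the installation succeeded
claude --version
```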
I've already installed it, and it hardly takes 5 to 10 seconds. Once it installs, you will get a confirmation. The second thing you must have is Ollama.
Ollama is software that lets you download and run open-source AI models locally. You can take any open-source model, depending on your machine; your machine should be strong enough to actually run the model. For example, let's suppose I take Qwen 3, an open-source model from a Chinese lab. I have about 32 GB of RAM. If you want to run a model locally, you should have a good GPU and plenty of RAM; for a model like this, at least 32 GB of RAM should be there.
So, as you see, you should download Ollama: simply click the download button on the Ollama website if you haven't already downloaded it. I already have Ollama installed on my system.
Right here, you can now chat with the model. I have already installed the Qwen 3 8-billion-parameter model.
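On the command line, pulling and chatting with that model looks like this — the tag `qwen3:8b` is my assumption for the 8-billion-parameter Qwen 3 build; use `ollama list` to check the exact name on your machine:

```shell
# Download the Qwen 3 8B model from the Ollama library
ollama pull qwen3:8b

# Start an interactive chat with it in the terminal
ollama run qwen3:8b
```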
So here I can ask it any question. Now, as soon as you type claude in the terminal and answer "Yes, I trust this folder", Claude Code opens right there. By default, it uses a paid model, Claude Sonnet 4.6. But suppose you want to use the local model instead; my model is running through Ollama.
There is a single command that you need to write here. First of all, you need to set the base URL and the API key. So close Claude Code with Ctrl+C and paste the command.
With this we are setting two environment variables. First is the OpenAI base URL: type it exactly as OPENAI_API_BASE, set to http://localhost plus the port wherever your model is running. Then the OpenAI API key, OPENAI_API_KEY, which needs to be set to ollama. Then run claude again. Simply pause the video, write the command, and press Enter.
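In a Unix shell this first attempt would look roughly like the following (the port 11434 is Ollama's default; on Windows PowerShell the syntax is `$env:OPENAI_API_BASE = "..."`). Note that, as the video shows in a moment, these particular variables turn out not to work for Claude Code:

```shell
# First attempt: OpenAI-style variables (this does NOT end up working)
export OPENAI_API_BASE=http://localhost:11434
export OPENAI_API_KEY=ollama
```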
As soon as you enter the command and start claude, press "I trust this folder" again. Now the model which gets used should be the one running through Ollama. So let me ask something right here... let me just wait. Yeah, it isn't responding; I think these environment variables actually need to be set differently.
So just close this. First of all, set ANTHROPIC_AUTH_TOKEN equal to ollama, and then set ANTHROPIC_BASE_URL. This one needs to be set to the actual address wherever your model is running. By default, Ollama runs on localhost port 11434; that is the default port on which the model will be serving. You just need to set these two environment variables.
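As a sketch, the two working variables from this step look like this in a Unix shell (on Windows PowerShell the equivalent is `$env:ANTHROPIC_AUTH_TOKEN = "ollama"` and so on; 11434 is Ollama's default port):

```shell
# Point Claude Code at the local Ollama server instead of Anthropic's API
export ANTHROPIC_AUTH_TOKEN=ollama
export ANTHROPIC_BASE_URL=http://localhost:11434
```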
After you set these, you simply type claude and then the actual model name that you are running. To check the model name, simply open Ollama and look at the installed models. In my case, the name is the Qwen 3 8-billion-parameter model, so just paste that same name.
So first of all, start the model, and now the model is started. Then start Claude Code here with that model name as well, and now Claude Code will start using the local model.
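Putting the last two steps together, a minimal sketch — assuming the model tag is `qwen3:8b`, and using Claude Code's `--model` flag to pick the model:

```shell
# The Ollama app/service already serves on localhost:11434;
# list installed models to confirm the exact tag
ollama list

# Launch Claude Code, telling it which local model to use
claude --model qwen3:8b
```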
