https://gist.github.com/gauti123456/aec9cf454c4aa6287c0bf20e0bf623ce
Watch My Visual Studio Code IDE Setup Video For Fonts, Themes & Extensions
https://youtu.be/Bon8Pm1gbX8
Hi Join the official discord server to resolve doubts here:
https://discord.gg/cRnjhk6nzW
Visit my Online Free Media Tool Website
https://freemediatools.com/
Buy Premium Scripts and Apps Here:
https://procodestore.com/
Hello guys, in this video I will show you how to run and set up Ollama inside VS Code directly, for autocomplete and for generating code. This will be a perfect video for you.
To set up Ollama here, you need an extension in VS Code. The extension is quite popular: simply go to the Extensions tab and type "Continue". It's an open-source AI coding agent and a really popular extension, with almost 2.7 million installs.
Click the Install button to install the extension. I've already installed it.
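If you prefer the command line, the extension can also be installed with the VS Code CLI. The extension ID below is the one listed on the Marketplace for Continue; treat it as an assumption and verify it there if the install fails:

    # Install the Continue extension from a terminal
    code --install-extension Continue.continue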
Once you install it, the extension will appear in the sidebar section, as you can see. Simply activate the extension. After you activate it, you just need to configure it, so go to the settings option right here.
Now, to set up Ollama, go to the model section right here and click it. In the autocomplete section you tell it exactly what sort of model you want to use. There are two slots: one model for general chatting, and one for autocomplete as you write code.
First of all, you need to install that model on your machine. The perfect way to install any model is through Ollama. Go to ollama.com and install it; it comes with a graphical user interface. If you don't have Ollama installed, you can download it completely free: simply search for "Ollama download". It's cross-platform software, so pick the build for your operating system.
Click the download button and download it; I've already done that. Once you launch Ollama, it runs as a graphical user interface application.
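On Linux, where a graphical installer is less common, ollama.com/download also documents an install script; this is a sketch of that alternative route:

    # Linux alternative to the GUI installer (script from ollama.com/download)
    curl -fsSL https://ollama.com/install.sh | sh

    # Confirm the install worked
    ollama --version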
Inside the model section, you can download any kind of model. The perfect model choice will depend on your machine.
If your machine is not powerful enough, I suggest downloading a model that is not too demanding. If you type "Qwen" right here, you will see this model, Qwen 2.5 Coder; I've already downloaded it. It's a perfect model even if you don't have a powerful machine, since it's only 986 megabytes. You can install any kind of model depending on your machine; if you have a powerful machine, you can download more powerful models as well.
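As a rough guide, the Qwen 2.5 Coder family comes in several sizes on the Ollama library; the figures below are approximate, so check ollama.com/library/qwen2.5-coder for current tags and sizes:

    # qwen2.5-coder:0.5b   ~400 MB  - very light, weakest suggestions
    # qwen2.5-coder:1.5b   ~986 MB  - the one used in this video
    # qwen2.5-coder:7b     ~4.7 GB  - needs a reasonably powerful machine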
I will take this one as the example. Simply copy the model name. Installing it is easy: go to the command line and type the command ollama pull followed by the name of the model. Execute the command and it will download the model; depending on your internet speed, after the download finishes it will give you a message that the model is successfully installed. Then, if you type the command ollama list, it will show that your model is installed.
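Putting those two commands together, the whole install step looks like this (the tag assumes the 1.5B variant shown in the video):

    # Download the model from the Ollama registry
    ollama pull qwen2.5-coder:1.5b

    # Verify it appears in the list of installed models
    ollama list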
After it installs, restart VS Code and type the model name, whichever model you are using, right here. You can configure it.
If you don't have this, simply click the config option, which opens the config.yml file. If you need the full file, I've given the link in the description; you can find it there and simply copy-paste it. Open the file from the description link and just replace the model name with whichever model you are using. One entry is for general chatting and one is for autocomplete. Here it's Qwen 2.5 Coder, 1.5 billion parameters, used for autocomplete.
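For reference, here is a minimal sketch of what the models entry in Continue's config.yml can look like with one Ollama-hosted model serving both roles. The exact schema can vary between Continue versions, so prefer the full file linked in the description:

    name: Local Assistant
    version: 1.0.0
    schema: v1
    models:
      - name: Qwen 2.5 Coder 1.5B
        provider: ollama
        model: qwen2.5-coder:1.5b
        roles:
          - chat
          - autocomplete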
After you do that, as you write code in any file, it will give you autocomplete suggestions. You will see these suggestions come up now; as you can see, it's already suggesting code.
You can see it is giving you suggestions here. It's good for basic autocomplete suggestions, because this is only a very basic model: the size is only 986 megabytes and the context window is 32,000 tokens. It will not suggest a lot, but it's a very good model for basic autocomplete suggestions. And we are also using it for basic chatting if you open Continue.
You can configure the same model for chat: just replace the model name right here. If you want basic chatting functionality as well, you can use the same name and the same model. You can also use a different model for chatting here. Once you configure this, go back to Continue.
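If you do go with a split setup, the models list can simply hold two entries with different roles. The chat model below is a hypothetical choice for illustration, not one used in the video:

    models:
      - name: Llama 3.1 8B            # hypothetical chat model
        provider: ollama
        model: llama3.1:8b
        roles:
          - chat
      - name: Qwen 2.5 Coder 1.5B
        provider: ollama
        model: qwen2.5-coder:1.5b
        roles:
          - autocomplete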
And right here you can ask anything. In this easy way, you can configure a local AI model directly on your machine. It's completely unlimited, because you are now hosting it yourself through Ollama.
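Under the hood, Continue talks to the local REST API that Ollama serves on port 11434. A quick smoke test from a terminal looks like this (the prompt text is just an example):

    curl http://localhost:11434/api/generate -d '{
      "model": "qwen2.5-coder:1.5b",
      "prompt": "Write a hello world function in JavaScript",
      "stream": false
    }'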
Let's suppose I ask: give me the top Node.js frameworks in 2026. You can ask any kind of question. As you type this, you will see it answer with frameworks as of 2026.
You can also ask it to build an image-to-PDF converter, for example. For basic questions, this is a very fast model as well.
If you don't have a very powerful machine, you can install this model and simply configure it. The process remains the same for any kind of model: first install it through Ollama, then configure it inside VS Code. So this is the basic approach by which you can configure and use Ollama directly in VS Code through the Continue plugin.
