Installing Spark into Linux System
Nov 28, 2024
0:00
in this video we are discussing
0:02
installing spark into Linux system and
0:05
we shall discuss step by step so at
0:08
first visit this link and find the spark
0:13
build related to your Hadoop version so the link
0:13
has been provided so if you go to this
0:15
link you will be getting multiple
0:17
different versions for spark so that
0:19
version has to be selected which is
0:21
matching with your Hadoop version extract
0:24
the files into your local system edit the
0:29
/etc/profile file and add the spark/bin
0:33
path into the PATH variable and then run
0:36
the /etc/profile file again
0:39
to activate the paths and in the
0:42
terminal run spark-shell to start the
0:47
shell and if you can do this one then
0:49
obviously the spark installation will be
0:52
completed so let us go for one practical
0:54
demonstration for the easy understanding
0:56
of these steps in this demonstration we
1:00
are going to show you how to
1:01
install spark on our system so at first
1:05
we are going to download the spark here
1:08
from the link as it has been shown here
1:10
and as we are dealing with the Hadoop
1:13
2.4 so the respective 2.4 version of the
1:17
spark will be required for our
1:19
installation to avoid compatibility
1:21
issues so here the download link for the
1:24
spark is also mentioned for the Hadoop
1:27
2.4 but also we can search it from the
1:31
list of all the other versions of the
1:33
spark available so at first we are going
1:35
for the copy of this download link the
1:38
URL is getting copied and we are pasting it
1:40
here now we can find that respective
1:48
version that is Hadoop 2.4 the spark has
1:52
to be downloaded for the Hadoop 2.4 here
1:54
so going for the search so we have got
1:58
it clicking here save the file and then
2:02
ok so the file is getting downloaded
2:04
onto our download folder
2:18
so the download has been completed
2:19
successfully so here we are having the tgz
2:24
file which has got extracted well under the
2:26
home folder we are going to create one
2:28
folder with the name spark and then going
2:34
into that folder we are pressing
2:36
Control+A that is a select all and then
2:38
we drag all the files on to the spark
2:40
folder closing the download folder here
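The create-a-folder, select-all, and drag sequence just described can also be sketched from the command line; the archive name spark-*-bin-hadoop2.4.tgz and the paths are illustrative, since the exact file name depends on the Spark version you downloaded:

```shell
# Create a spark folder under home and move the extracted files into it.
# The archive name below is illustrative -- use the file actually saved
# in your Downloads folder.
mkdir -p ~/spark
tar -xzf ~/Downloads/spark-*-bin-hadoop2.4.tgz -C ~/Downloads
mv ~/Downloads/spark-*-bin-hadoop2.4/* ~/spark/
```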
2:46
now we are going to set the respective
2:49
path here so we are going for sudo
2:52
gedit /etc/profile so this
2:57
particular file has to be updated with
2:59
the path giving the password so coming
3:04
at the end making a space here so this
3:12
is the
3:13
respective path to be copied that is
3:15
export PATH=$PATH:/home/bigdata/spark/bin
3:17
so this respective path has to
3:20
be put into that /etc/profile
3:23
file copying and pasting it at the
3:33
end saving it
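For clarity, this is the line being pasted at the end of /etc/profile; the /home/bigdata prefix is the home directory used in this demonstration, so substitute your own:

```shell
# Appended at the end of /etc/profile: extend PATH with spark's bin folder
# (/home/bigdata/spark is the install location used in this demonstration).
export PATH=$PATH:/home/bigdata/spark/bin
```

Because /etc/profile is normally read only at login, the demonstration next runs the file by hand so the new PATH takes effect in the current terminal.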
3:34
closing it now coming to the terminal we
3:39
are going to execute the profile so that
3:41
the path will get reflected and become
3:43
effective and now we are going for this
3:46
spark execution so how to execute this
3:50
one spark-shell without having any
3:53
blank space in between you can find that
3:55
the spark is getting executed and in
3:59
this we have shown you how to
4:01
install spark on our system so you
4:05
can find the scala prompt has come
4:08
here so there is a scala> prompt so in
4:10
this way we have shown you how to
4:12
install spark in our system and what are
4:15
the different steps we require to do so
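As a recap, a minimal check that the steps above worked, assuming the terminal has re-read /etc/profile:

```shell
# spark-shell should now resolve via the PATH entry added to /etc/profile
command -v spark-shell   # prints the resolved path under your spark/bin folder
spark-shell              # launches the shell; a scala> prompt confirms the install
```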
#Programming