Sync Salesforce Knowledge Articles to External Systems
Nov 11, 2025
Blog Post:
https://www.infallibletechie.com/2025/11/sync-salesforce-knowledge-articles-to-external-systems.html
Hello everyone. In this video, we are going to see how to sync Salesforce Lightning Knowledge articles to an external system. The use case we are going to discuss in this video is this: I have Salesforce Lightning Knowledge articles, and I want to sync the published articles to my external system.
The source of truth for the Salesforce Lightning Knowledge articles will be the Salesforce application. I don't want to replicate everything that is in Salesforce into my external application; I only want the updated, published Salesforce Lightning Knowledge articles in my external system.
The first step to achieve that is to fetch the Salesforce Knowledge articles from the Knowledge__kav object where PublishStatus is Online.
Whenever we create a Salesforce Lightning Knowledge article, the status will be Draft. Once it is published, the status will be set to Online, and on the screen we will see it as Published. Let's say that once the Knowledge article is published, I want to make some changes. For that, I have to use the Edit as Draft button and create a new version of the Salesforce Lightning Knowledge article. Once the draft version is created, I will be able to update the Knowledge article. Once the changes are done, it will be published: the older version will be set to Archived, and the newer version will be set to Published, that is, Online.
So here I'm getting all the published Knowledge articles, and I'm making use of the query endpoint. I don't want to fetch any deleted records, so the query endpoint is the right one. This is for the initial load: I will be able to get all the Salesforce Lightning Knowledge articles from Salesforce, and then I should be able to create all those records in my external system.
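The initial load can be sketched as a single call to the REST query endpoint. This is a minimal sketch, not the exact SOQL from the video: the field list and the API version (v62.0) are assumptions, and `my_domain` is a placeholder for your Salesforce My Domain URL.

```python
from urllib.parse import quote

def initial_load_url(my_domain: str) -> str:
    """Build the REST query URL that fetches all published Knowledge articles."""
    soql = (
        "SELECT Id, KnowledgeArticleId, Title, UrlName, PublishStatus, SystemModstamp "
        "FROM Knowledge__kav WHERE PublishStatus = 'Online'"
    )
    # The query endpoint (unlike queryAll) skips soft-deleted records,
    # which is exactly what we want for the initial load.
    return f"{my_domain}/services/data/v62.0/query?q={quote(soql)}"

url = initial_load_url("https://example.my.salesforce.com")
```

In real code you would send this URL as a GET request with an `Authorization: Bearer <access token>` header.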
The next step is syncing the changes that happen in Salesforce to my external system. To get those, I'm going to fetch the Knowledge articles where PublishStatus is Online or Archived and SystemModstamp is greater than or equal to the previous run date/time. Say I'm going to run every hour: if the time is 7 p.m. today, I would have run it at 6 p.m. So at 7 p.m. I will set the previous run time as 6 p.m., so that it finds the delta between 6 and 7 p.m., and I will be able to get the changes that occurred in that particular window.
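The hourly delta fetch described above can be sketched the same way, this time against the queryAll endpoint so that soft-deleted rows come back too. `previous_run` stands in for whatever timestamp you stored at the end of the last run; the field list and API version are assumptions of this sketch. Note that SOQL datetime literals are not quoted.

```python
from datetime import datetime, timezone
from urllib.parse import quote

def delta_query_url(my_domain: str, previous_run: datetime) -> str:
    """Build the queryAll URL for articles changed since the previous run."""
    # SOQL datetime literal, e.g. 2025-11-11T18:00:00Z (no quotes around it).
    stamp = previous_run.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    soql = (
        "SELECT Id, KnowledgeArticleId, Title, PublishStatus, IsDeleted, SystemModstamp "
        "FROM Knowledge__kav "
        "WHERE PublishStatus IN ('Online', 'Archived') "
        f"AND SystemModstamp >= {stamp}"
    )
    # queryAll (not query) also returns soft-deleted records from the Recycle Bin.
    return f"{my_domain}/services/data/v62.0/queryAll?q={quote(soql)}"
```

At the end of each successful run, persist the run's start time so the next run can use it as `previous_run`.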
Here I'm using the queryAll endpoint instead of query. Whenever we use the queryAll endpoint, we will be able to get the deleted Knowledge articles as well. To be deleted, an article has to be archived first.
Okay, let's see a simple use case here. The article is in Published status. To make some changes, I have to click Edit as Draft, and it will create a new version. It created a new version; I will be able to make some changes and then save it. Now this is the draft version, and this is the currently published version. To publish the latest version, I will click Publish; the previous version will become Archived, and the current version, which is in Draft, will become Published, that is, Online.
Now consider deletion of an archived version. For example, I'm going to publish this one: it becomes the published version with all the latest changes, and the other one becomes Archived. If somebody goes and deletes that previous version, then I have to get the deleted record and clear it in my external system. Instead of keeping unwanted data in my external system, I should be able to clear it out. To catch this particular scenario, I'm using queryAll, so that I can get all the soft-deleted Archived Knowledge articles and then clear them in my external system if any are available.
Whenever we execute a SOQL query through the Salesforce REST API, only the first batch of records (up to 2,000) is returned. For the next set of records we have to make use of nextRecordsUrl, and we have to keep checking the done attribute. Whenever the done attribute is true, as it is here, there is no nextRecordsUrl, and all the records were returned without further API calls. With both the query and queryAll endpoints, if all the records couldn't be fetched in one API call, you will see nextRecordsUrl: the path for the next call. Append this path to your Salesforce My Domain URL, and you should be able to fetch the next set of records. Keep doing this until the done attribute is set to true.
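The done/nextRecordsUrl loop above can be sketched like this. To keep the example self-contained and testable, `fetch_json` stands in for an authenticated HTTP GET that returns the parsed JSON body; in real code it would call the REST API with your bearer token.

```python
def fetch_all_records(my_domain: str, first_path: str, fetch_json) -> list:
    """Follow nextRecordsUrl until the response's 'done' attribute is true."""
    records = []
    path = first_path
    while True:
        body = fetch_json(my_domain + path)  # one REST API call per page
        records.extend(body["records"])
        if body["done"]:  # true => no more pages to fetch
            return records
        # nextRecordsUrl is a path; prepend the My Domain URL for the next call.
        path = body["nextRecordsUrl"]

# Simulated two-page response, mimicking the REST API's response shape:
pages = {
    "https://example.my.salesforce.com/q1": {
        "done": False,
        "nextRecordsUrl": "/q2",
        "records": [{"Id": "ka01"}, {"Id": "ka02"}],
    },
    "https://example.my.salesforce.com/q2": {
        "done": True,
        "records": [{"Id": "ka03"}],
    },
}
result = fetch_all_records("https://example.my.salesforce.com", "/q1", pages.__getitem__)
```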
Next, to make this syncing easy and seamless, I am making use of Id as the unique identifier on Knowledge__kav. We could also make use of KnowledgeArticleId: we could check whether an existing record is available for that particular KnowledgeArticleId and then update the existing record. But I wanted to keep the syncing very simple. Whenever the status of the Knowledge article is Online, that is, Published, I create that particular record in my external system. If it is Archived, then I find that particular archived entry in my external system and clear it. That keeps the syncing seamless, instead of having complex logic in my external system to figure out whether a record already exists or whether something changed. So that's the reason: whenever it is Online, I'm inserting a record; whenever it is Archived, I'm finding the existing record and clearing it from my external system.
The biggest caveat with this solution: if somebody hard-deletes the archived Knowledge articles, then we won't be able to clear those unwanted records in our external system. So we have to inform our Salesforce admins, developers, and even the users not to empty their Recycle Bin whenever they delete any Knowledge articles. That way, whenever the job runs, it will find the archived articles that are out of date, because we would have published new versions of those articles, and only that current information should be in our external system. Unwanted archived data shouldn't remain in the external system. So make sure that scenario doesn't happen, so that we avoid duplicate records in our external system.
Please check the video description. In the video description I have shared my blog post, and from the blog post you should be able to get the sample SOQL queries that were referenced in this video.

I hope it was helpful. Thank you for watching.
