LM Studio on Linux: The Easiest Way To Run Local AI (No Cloud Needed)
Duration
26:55
Captions
1
Language
EN
Published
Dec 8, 2025
Description
Take control of your AI workflow with LM Studio on Ubuntu 24.04, a powerful, beginner-friendly tool that lets you download, run, and experiment with open-source AI models entirely on your own machine. In this video, I walk through installing LM Studio, exploring the interface, loading models, and enabling the local API so you can build your own AI tools without relying on the cloud. Whether you're a Linux user, a homelab builder, or you're simply curious about local AI, this guide gives you everything you need to get started. https://lmstudio.ai Rocky Linux Supported by CIQ: https://ciq.com/products/rocky-linux/ CompTIA Linux+ Certification Course https://youtu.be/qNxuTRCRjoQ Remember to Like, Share, and Subscribe if you enjoyed the video! Also, if you are interested in more Linux content, please consider becoming a channel member so I can continue to produce great content! ✔️RECOMMENDED LINUX BOOKLIST ------------------------------- Linux Pocket Guide: Essential Commands: https://amzn.to/3xGPvsK CompTIA Linux+ Certification All-in-One Exam Guide: Exam XK0-004 https://amzn.to/3uQ3wmh 101 Labs - CompTIA Linux+ https://amzn.to/3vtj7rb How Linux Works: What Every Superuser Should Know https://amzn.to/3vrLkOO Linux Bible https://amzn.to/3rwEkPH ✔️SOCIAL NETWORKS ------------------------------- KeepItTechie: https://keepittechie.com/ Facebook: https://www.facebook.com/KeepItTechie Twitter: https://twitter.com/keepittechie Instagram: https://www.instagram.com/keepittechie/ Discord: https://discord.gg/RjZWuyd -------------------------------- ✔️RECORDING EQUIPMENT ------------------------------- Insta360 4K Webcam - https://amzn.to/3RddfgZ Rode Procaster Microphone - https://amzn.to/42RSInF RØDE RØDECaster Duo - https://amzn.to/4ct1T1X Cloudlifter CL-1 Mic Activator - https://amzn.to/4ic7BXv Logitech LED Streaming Light - https://amzn.to/4j7Z8FT -------------------------------- 0:00 – Intro & Why Local AI Matters 0:40 – What Is LM Studio? Features & Platforms 2:30 – Installing LM Studio AppImage on Ubuntu 24.04 7:10 – First Launch, Model Download & Performance Check (htop) 9:25 – LM Studio Interface Tour (Chat, Models, Server, Settings) 18:30 – Using LM Studio's Local API with curl 23:02 – Tips, Hardware Advice & Final Thoughts + Motivation
Captions (1)
What's up everybody? Welcome back to
Keep It Techie, where I help you learn
Linux and break into the tech field one
command at a time. I'm Josh and today
we're checking out a tool a lot of
people have been asking about, and
that's LM Studio. And if you've been
curious about running AI models locally
on your own hardware with no cloud, no
subscription, total privacy, then LM
Studio is a great starting
point. Now, before we jump in, do me a
quick favor. Go on, hit that like button
and subscribe if you want more
Linux tutorials, home lab projects, and
open source tools. Let's get into it.
All right, so I pulled up their website.
It's lmstudio.ai.
And LM Studio is basically a desktop
application that lets you run large
language models entirely on your
machine. It could be a laptop, it could
be a desktop, as long as you got the
hardware and it'll adjust based on the
hardware that you have, and run
those AI models on your system. Now,
some standout features include a clean
UI for downloading and managing models,
a built-in ChatGPT-style interface,
also CPU and GPU acceleration, and then
an OpenAI-compatible local API. It
also works on Linux, Windows, and
macOS. So, if you have a Mac, you can run
this on your Mac. You can run this on
your Windows computer as well. Now the
big appeal is its simplicity. Instead of
juggling command line options or
configuring multiple tools, LM Studio
gives you a unified workspace. As you
can see, if we scroll down, that is the
standard across all platforms. Now,
here's why I think LM Studio is worth
checking out. For one, you got your
local only AI. Your data never leaves
your device. It's beginner friendly. You
don't need deep AI knowledge to get
started. OpenAI-compatible API: easy
integration with scripts and apps. Also,
flexible model options. So, you got
Llama, Gemma, Mistral, Phi, and more. And
this is awesome because it works great
in home labs because you can run it on
your desktop and even on a VM or
server. And a quick note, LM Studio can
be used alongside Ollama, and there are
ways to bridge the two so they share
models, but we're not going to
cover that in this video today. Just
know the option exists if you're running
Ollama elsewhere in your home lab. Now,
let's go on and hop over to our
virtual machine so I can walk you guys
through the process of getting this
thing set up. All right, so I'm logged
into my virtual machine. This is
Ubuntu 24.04,
and LM Studio ships as an AppImage
for Linux, and that's what we'll
install today. So, what you want to do
first is head over to LM Studio's website
and let's give it a couple seconds, but
let's search for LM Studio. Just go to
the website right fast. I know
it's a .ai. So, let's just go to it.
Search right fast. And what we want to
do is download the AppImage. And so,
that'll download into our Downloads
directory. All right. So, we good.
Our AppImage is stored, like
I said, in our Downloads directory. But
the first thing we need to do is open up
the terminal. That's one thing I don't
like about Ubuntu. I wish they would pin
the terminal here. So, let's add it to
favorites right fast. And we can zoom in
a little bit so you guys can see it a
little better. And first thing you want
to do is run update. So, you go
sudo apt update just to verify you
don't have any updates for the
system, which I think this system has
been updated. Just make sure, and then
there are a couple dependencies you want
to get installed. One is the
fuse package, and I'll show you guys
that it's required; AppImages require
fuse. And then I think wget is used to
download the models, which I'm sure is
already installed, and I'm going to
install something else right fast too.
So, let's go sudo apt install, and
let's just install libfuse2. That's the
right package name. And then let's just
verify wget is on there, which
I know it's on there. And then on this
server, or on this system, I know for
some reason it don't come with htop. You
would think htop would be a default
application on here, but anyway, as you
can see, it's going to install htop and
fuse on the system, and that'll
allow us to run our AppImage. What's
up, y'all? If you've been watching my
channel for a minute, you already know I
stay talking about Linux. And if you're
looking for a solid, reliable
enterprise Linux distro, let me put you
on to Rocky Linux. This is the go-to
replacement for CentOS. And it's built for
the community by the community.
It's got everything you need for
a stable and secure Linux experience,
whether you're running servers, home
labs, or enterprise workloads. And the
best part, it's backed by CIQ, making
sure it stays rock solid for the
long haul. So, if you're tired of these
companies pulling the plug on your
favorite distros, Rocky Linux is
where you need to be. And I've covered
Rocky Linux before. And trust
me, it's worth checking out. So, head
over to rockylinux.org to learn more
and get started. Keep it techie.
Peace. All right, so we good. We got
those two packages installed. Now, one
thing you can do, you can run it from
the Downloads directory if you want to,
but what you have to do is go into
properties first. I'm going to show you
guys the GUI way: go into
permissions, and then what you want to
do is go down to "allow executing file
as a program." And that will change
those permissions, or make it
executable. But I'm not going to do it
that way. I'm going to do it in the
terminal, because what I'm going to do
is move it and make it cleaner, put it
in a better location. I'll put it in the
/opt directory. That way it's just a
little bit cleaner. This will
just make the application a little
tidier on your system, and I
recommend you guys follow this. So what
I'm going to do is type sudo, and we
have to spell that right. Let's go move
(mv), and then let's go under our
Downloads directory and then the LM
Studio AppImage. And what we're going to
do is move it under our /opt directory,
and we're going to name it something
different. I'm going to name it
LM-Studio.AppImage. And you got to make
sure you spell it right. So App with a
capital A and Image with a capital I. Go
press enter. It'll move it over there.
And as you see, it disappeared from my
Downloads directory. Now you want to go
and make it executable over there. So we
need to use sudo, because that directory
is not owned by my user; it's owned by
root. So we have to use sudo to make it
executable. So let's go under our /opt
directory, and then we need to look for
that LM Studio AppImage. And then what
we want to do is link it. What I'm going
to do is link it into /usr/local/bin.
That way we can just run the command.
That's what I'm saying. So let's go ln,
as in link, with -s, and then we're
going to link our /opt LM Studio
AppImage, and we're going to link that
into /usr/local/bin. And then we just
want to name it LM-Studio. And that's
good to go. All
right. So, with that being done, all we
have to do is launch the application by
typing LM-Studio. It'll know where it is
because it's in that bin directory. And
so, it'll find it. Boom. So, it'll load
up. And this is the first time we've
opened it up. So, it's going to go
through a little setup. So, I'll walk
you guys through that right fast. And
boom. There we go. Local AI on your
computer. Get started. Now, when this
thing is first booting up on my virtual
machine, it's going to slow down a whole
lot. At least for me. But if you got
like a better computer, you know what
I'm saying? Faster computer, you should
be fine. Or a laptop or something that
has pretty good specs, you should be
good. And that's why I wanted to install
htop so we could look at what's going
on. As you see, it's maxing out our CPUs
and all that stuff. We can watch what's
going on while it's going through the
process. And this is going to ask you
what level of interface complexity you
want, basically how much output you want
to see. You could do power user, you can
do user. It's basically saying I'm just
getting started with AI, or I know what
I'm doing, or show me everything if
you want to be a developer or you want
to have the developer level. I'm going
to just select power user. That's fine. One
thing it's going to do is look for a
model that will work on your system. And
it'll also add LM Studio command line to
your path. So, that's another additional
setting that it will add to your system.
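While that model downloads, here's the terminal work from earlier collected into one runnable sketch. I'm using a throwaway directory in place of /opt and /usr/local/bin so it runs without sudo, and a placeholder file in place of the real AppImage (whose filename varies by release); on a real system you'd swap the actual paths back in and put sudo in front of the mv, chmod, and ln.

```shell
# Sketch of the AppImage install flow from the video, using a temp dir in
# place of /opt and /usr/local/bin so it can run without sudo.
# (Real flow first: sudo apt update && sudo apt install libfuse2 wget htop)
set -e
demo=$(mktemp -d)
mkdir -p "$demo/opt" "$demo/usr/local/bin" "$demo/Downloads"

# Stand-in for the AppImage downloaded from lmstudio.ai (filename varies).
printf '#!/bin/sh\necho "LM Studio placeholder"\n' \
  > "$demo/Downloads/LM-Studio-x86_64.AppImage"

# 1. Move it out of Downloads to a tidy location.
mv "$demo/Downloads/LM-Studio-x86_64.AppImage" "$demo/opt/LM-Studio.AppImage"

# 2. Mark it executable.
chmod +x "$demo/opt/LM-Studio.AppImage"

# 3. Symlink it onto the PATH so it runs as a plain command.
ln -s "$demo/opt/LM-Studio.AppImage" "$demo/usr/local/bin/LM-Studio"

"$demo/usr/local/bin/LM-Studio"   # prints "LM Studio placeholder"
```

On the real system, step 3 is what lets you launch the app by just typing the command name from any directory.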
So, let's go on and download this model
that it recommends. It's only 2.5 gigs.
And so, just let that go until
it finishes. You can explore the app,
but I recommend you just wait for it
to finish, cuz every time I've clicked
that when I've been playing around with
LM Studio, it'll make the
download fail of the model. So, I don't
know if that's a bug or something, but
it has made it fail for me. I don't know
if other people have seen that or if
that's just a bug they need to work on
or something. I don't know. But whenever
I don't go in there, it'll download the
full model and then we're good to go.
But I'll be back when this finishes. All
right. And just my luck, that mug
failed. So, what we're going to have to
do is just start the download again.
It's fine. It'll continue from where it
stopped at. But this right here is just
showing you some of the new features in
this latest version. And let's go back
under here under our downloads. And this
will allow us to resume it. So, let's
just let it go. And I'll be back when it
finishes. All right. So, it finished
downloading. And what you can do is load
that model right away. I'm going to go
and close this, but you hit load model.
And what it's going to do is load it in
there as your main model. And then you
can start chatting. But let's go over
the interface right fast. So, obviously,
this is your chat window. Right here is
the developer window. This allows you to
share your models; it allows you to
set up a server so you can share it
over the network. It's basically an
API tab, and you can enable LM Studio's
local server; it's an OpenAI-compatible
API, and right now, as you can see, it's
turned off by default. But you can set
all this up. You go in here: that's your
server port. You can set up whatever
port you want. You can serve it on the
local network, allow per-request
settings like for MCP servers,
just-in-time model loading, all that
good stuff.
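When you flip that server toggle on, a quick way to sanity-check it from another terminal is to hit the models endpoint. This is a sketch assuming the default port 1234; the fallback message just means the server isn't running yet.

```shell
# Ask the local server which models it can serve. LM Studio exposes an
# OpenAI-compatible API, so GET /v1/models lists them. If the server
# isn't up, curl fails and we print a note instead.
out=$(curl -s http://localhost:1234/v1/models 2>/dev/null \
      || echo "server not reachable on port 1234")
echo "$out"
```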
And there's also examples of how the
endpoints work. And then also the logs
down here. And then you can go through;
there's a lot more information here. You
can go into the context settings. This
gives you custom field settings, like
the temperature sampling. What else?
Structured output. You can do JSON
schema. Let's see what else is on here.
Speculative decoding if you need
it. And then let's see. Load. So, this
just goes through and you can make
adjustments under here for the load,
especially if you have a GPU or
something if you want to offload certain
things to a GPU. All right, so that's
enough about the server. Now, under
here, this shows you all your models you
have installed and as well as the
location where those models are stored.
So, this is our models directory. So,
under my home directory, there is a
hidden folder called .lmstudio, and that
is where my models are stored.
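If you'd rather check that folder from the terminal, here's a sketch. I'm assuming the hidden directory is ~/.lmstudio on current builds (older builds used ~/.cache/lm-studio); adjust the paths if yours differs.

```shell
# Report how much disk the downloaded models are using. The models live in
# a hidden folder under $HOME; the exact name has changed across LM Studio
# versions, so check the likely candidates.
found="no models directory found"
for dir in "$HOME/.lmstudio/models" "$HOME/.cache/lm-studio/models"; do
  if [ -d "$dir" ]; then
    found=$(du -sh "$dir")
  fi
done
echo "$found"
```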
And right now, we have this one model.
That's the only model we have currently.
And then if we click under here, under
the search, or the Discover tab, this
will open up these settings and
allow you to search for other
models if you want to get another one
installed.
And there is a staff picks filter. You
can refresh that. You can do best match.
And like I said, this thing
will show you based on what you run on
your system. It won't show you like big
models. Now, you could change this where
it'll show any and
everything, but you may not be able to
run it on your system. You know what I'm
saying? Because it may be like a huge
model that you need more RAM for. And as
you can see down here, it says based
on a calculated device memory of
7.76 GB. So I have 8 gigs on this
server, which shows up as 7.76 GB,
as you can see in htop. That's why
I opened up htop. So that's the max I can
use. It's calculating that and only
showing us models based on the amount of
RAM I have. And like I said, you can
look at some of the big ones if you
want to, but it'll adjust based on what
you have on your system. So,
just be wary of that. And also, let's go
under runtime. This will show you your
extensions, the packs, and everything.
Let's say, like, for Harmony: that's chat
history rendering and parsing. It
has a fix for this. You need to, let's
see, upgrade the Python environment.
This fixes a bug for the Harmony server.
So, let's go and fix that. We can just
run that and download the patch for
it. That's OpenAI Harmony. That'll get
that updated on our system. So, it'll
update that for us. And we also have
some other extension packs in here. It
says error surveying hardware. Yeah,
that's because we don't have CUDA on
here, which is for a video card.
We don't have a video card on here, so
we can't put CUDA on there. And let's
see, Vulkan. I'm not sure what that's
for. Yeah, GPU required. So, we don't
have a GPU. So, that's why that's not
installed as well. You can go into here
and just look at what's all compatible
with your system. We don't have
any of that stuff. So, we can't install
any of the other extensions or
frameworks and all that stuff. So, go
under our hardware and this breaks down
the hardware. You got offload KV cache
to GPU memory, if we had it. Let's see.
CPU compatible, but as you can see,
yeah, it says zero GPUs detected. Let's
see. Memory capacity, it breaks that
down. Guard rails. Now, you can modify
your guard rails. You got balanced,
relaxed, off (not recommended), and
custom. Those are your guard rails. So,
you want to make sure you pay attention
to that and don't go too far outside
your guard rails. See, you
can look at it right there. It says
loading models beyond system resource
limits may cause system instability or
freezing. So, if you go outside them
guard rails, you know what I'm saying?
You could, like, kind of mess up your
system.
You know what I'm saying? Where it
freezes up and you have to restart your
system or something like that or
whatever. But anyway, let's go under
settings. I just want to show you guys
some of the other settings right there.
You got general settings. Let's say you
want to stay on the stable version or
you want to go to the beta version. I
recommend you stay on the stable
version. That way you don't run into any
issues. You can do check for updates.
There we go. And then show side button
labels. You can put the labels up there.
I like to add that. You can also change
the colors. You see how the colors
changed over there. Presets: so, show
configuration dialog when committing
new fields to the presets. Auto update,
auto delete, use LM Studio Hugging Face
proxy; so that's a proxy. Open download
panel when starting a new model
download: so that's what that is right
there. When we downloaded it, it popped
that open so we can see the downloads.
And
this is the user interface complexity
level which we selected in the beginning
with the different levels. So user,
power user, developer. You can go in and
change that after the fact. And then you
can modify how you want to see the
models. You can look at the full name or
not, the color theme. You can specify
what you want. I'm going leave it on
auto cuz it's fine. Language, you can
modify that. Model defaults, you specify
that. Model maximum, this is for the
guard rails for loading the model. So,
beyond system resource limits, you just
want to keep those guard rails on there.
I recommend you keep those guard rails
on. And then you can reset all this if
you go in and mess around with it. But
just show you some more of the options.
So, under our chat, we got our chat
options. So, you can go in here, make
changes under there as well, like a few
changes. And then we got the developer
tab. This will show debugging
information, enable model load
configuration support and extension
packs. Let's see, on-demand loading, all
that good stuff. And the
integration right now we don't have any
integration. And then right here, this
will take you to the LM Studio hub. This
will allow you to go to the
documentation, all that stuff. And then
you can log in to LM Studio Hub. That
way you can get some more information.
You can join the organization, create an
account to publish projects, all that
good stuff. So, that's pretty much it
under the settings. I just wanted to at
least show you guys that. And that's
pretty much it on the interface.
I mean, you can go up here and look at
the menu options. So, we got file,
that's quit. And actually, it doesn't
quit. Let me show you guys that right
fast. So, if we hit quit, it's actually
not stopped. As you can see, the
terminal is still running back here in
the background. It's still running.
That's because it is in the
tray. So, we can open it back up. It'll
open it back up. And we are back to it.
So, as you can see, it runs in the
background. It'll stay running in the
background if you don't close it or if
you don't quit it. You have to
quit it up here. You have to right click
on it and hit quit LM Studio to quit the
application. Just so you guys know,
you've got undo and all the normal stuff
under edit and view. You can change the
view of it. The window options: zoom,
minimize, close. And then under help,
that'll take you to the
technical documentation and the
LM Studio blog and the website. And down
here you can see what's going on with
your system. So, the amount of RAM and
the CPU usage at the time and then your
account if you're logged into your
account. And then this will bring up
your settings as well. It'll go
back into the settings for you. Now,
let's go back and play around with the
chat. Like I said, it's running on a
virtual machine, so this thing is slow,
but I'm going to go down and just run, I
don't know, just run something. Let's
see. Explain LM Studio in simple
terms. Let's see what
it does when I say that. Boom. And it
should give us some feedback here. It's
going to think, okay, the user wants me
to explain LM Studio in simple terms.
First, I need to make sure I understand
what LM Studio is. Wait, I'm not
entirely sure. See, and that's one thing
about these models: they're offline.
So, they don't have all the context. And
so, some of the answers are not going to
be good. You'd have to connect this
thing to the internet, and I'm not sure
how to connect it to the internet. I
haven't tried to do that yet. I know how
to connect Ollama to the internet so
that you can get up-to-date information,
and it should be the same way in here;
some kind of way you can open it up to
where it can connect to the internet as
well. I just
haven't done it. But if you look here,
this will show you what's going on with
your system CPU and your memory usage
and swap. It's going through. But that's
all running on your system while this
model is running in the background. Just
wanted you guys to see what's going on.
But as you can see, it's writing out
some information that we asked for. And
it even gives you some information down
here. So 3.52 tokens per second, 921
tokens, 2.53 seconds to first token,
stop reason: EOS token found. And it
thought for about 2 minutes and 59
seconds to come up with these results.
Right. So
let's quickly try another model. Let me
download one right fast. Actually, let's
go to Discover right fast and let's
look for... I want to go to, yeah, Phi
Mini. Let's download that one. Hold that
thought. All right. So, we got Phi, or
Phi-4, on
here. So, as you can see, it finished
downloading. So, what we can do is go
back to our chats or we can actually
click down here and load our model and
this will unload or eject the other
model and you'll see it start loading up
and you have the option to switch back
and forth if you want to. But what I'mma
do is not use the chat. What I'm going
to do is show you guys how to use it in
the terminal. Let's say you want to
start scripting. You can leave this
thing running in the background. That's
why it allows it to run in the
background like this. And then we're
going to... actually, I need to open it
back up. What we're going to do is start
the actual server. I want to show you
guys how to use the API server. And
actually, you can do it from here. I
just didn't think about it. But yeah,
start server on port 1234. That's fine.
But I wanted to at
least show you guys in here. So under
developer, we can start our server. And
boom, that'll start our server and
it'll share out all of our models. So
let's go on, close that, and we
can open up our terminal. Let's get a
new window popping and move it to the
middle. And let's go on and zoom in a
little bit for you guys so you guys can
see. I already had a JSON written out.
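For reference, the request being described here is shaped roughly like this. Treat the model name (phi-4-mini-reasoning) and port (1234) as the values used on screen; swap in whatever model you've actually downloaded. The block only builds and prints the JSON payload; the commented curl line is what actually sends it once the server is running.

```shell
# Build the JSON body for a chat completion against LM Studio's
# OpenAI-compatible endpoint. We only print it here; the commented
# curl shows how to actually send it.
payload='{
  "model": "phi-4-mini-reasoning",
  "messages": [
    { "role": "user", "content": "Explain LM Studio in simple terms." }
  ]
}'
echo "$payload"

# To send it (with the server running on the default port):
#   curl http://localhost:1234/v1/chat/completions \
#     -H "Content-Type: application/json" \
#     -d "$payload"
```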
What we're going to do is use the curl
command and we're going to curl against
our local host. And we're going to
specify our model. So, we're going to
use Phi, and I've had this in scripts
for other stuff. So, what is it? Let me
look over there: phi-4-mini-reasoning is
the actual model.
And basically what we're doing is a
curl of localhost. The port is
1234, and then we're going to chat with
it. We're going to do a completion, and
the Content-Type header says the body is
JSON, and then the model: we're
specifying the model, and then the
message: the role (user), and then the
content. You can put whatever you
want in here. You can ask a question,
whatever. You just have to put it in
that format. And so, let's go down, press
enter. And this may take a while. Ah, we
run into an issue right here. It says
failed to load. Let's see the reason.
Error: model loading was stopped due to
insufficient system resources. So, let's
open this up right fast. Let's go back
up in here and let's just make sure.
Let's see. Do we have the model loaded?
Yeah, it's loaded. Let's go back in
here. It's loaded. Let's eject it from
the chat cuz it shouldn't matter. You
don't need it loaded. It'll find it. We
don't need a chat loaded. It'll find the
model that we want and it will load it
with the actual command that we run. So,
let's press enter. Boom. And you can
look over here on... yeah, this is
something I probably should have brought
up so you guys can see, but as you can
see, it's running a prompt. And this is
the information from the server; it's
showing you basically everything that
the model is doing. It's trying to
go through and basically run the model.
So: running chat completion on
conversation with one message, sampling
parameters, total prompt tokens: 19, and
prompt processing progress, going from
zero to 100 percent prompt progress.
So we should see some results over here
in a second. All right. So it finished.
Scroll back up just so you guys can see
right fast, but it used that model and
this is the results and it returned it
in JSON format. You can use this in your
code. So, as you can see, it says,
"Okay, let's see what the user needs
here." The original message says that
"I'm Phi, a math expert from Microsoft."
And then there's a greeting, but
the problem statement isn't provided.
Maybe there was a technical issue. But
anyway, basically, at the end,
it responds, "Hello,
it seems like you're ready to ask a
question, but I didn't receive the
specific problem you need help with.
Could you please share the details of
the problem?" So, this thing is ready to
work. You know what I'm saying? So, you
can add this to your code. You know what
I'm saying? Using the curl command to
gather information that you need and
then put it into your code, whatever
you're trying to do. So, you can call
this thing. And this is the exact same
thing I do with Ollama. I actually call
the API. I basically send it information
that I need and it sends me back the
results locally. All this stuff is
local, and it organizes it, or puts
it in a format that I need, and then I
use it how I need to use it. So that's a
pretty cool feature right there. Now let
me talk about a few tips if you're
planning to use LM Studio regularly.
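A quick way to put numbers on those tips before picking a model: this is plain Linux tooling (reading /proc/meminfo instead of the interactive htop used in the video), nothing LM Studio-specific.

```shell
# Pre-flight check before choosing a model size: total RAM and CPU core
# count, read straight from /proc (htop shows the same numbers live).
awk '/MemTotal/ {printf "RAM: %.1f GB\n", $2 / 1024 / 1024}' /proc/meminfo
echo "CPU cores: $(nproc)"
```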
First, you need to start with a small
model. I recommend you start with Phi or
Gemma or one of those 3-billion to
7-billion Llama models. They're great
for learning. And also watch your RAM.
You know what I'm saying? Models load
into RAM. And that's why I ran into that
issue where it said we ran out of RAM.
And quantization matters: less
aggressive quantization means higher
RAM usage. Also try GPU
acceleration. If you have an Nvidia GPU,
even if it's one of those mobile Nvidia
GPUs, like in a laptop, you can still
use that GPU. It's still better than
nothing. You know what I'm saying? LM
Studio can take advantage of it. And
also, if you eventually run one of those
LM Studio plus Ollama setups, then a
wired LAN gives you the smoothest
experience. And also, LM Studio can take
up a lot of disk space, especially once
you start downloading a bunch of models.
So, just be wary of that. If you
download multiple models, it's going to
take up a lot of space. Cuz as you can
see, both of those two models that I
downloaded, they were around 3 gigs
apiece. So, 2.5 for one and 3 gigs for
the other one. So that right there is
close to 6 gigs. Just showing you that's
two models taking up a good amount of
space. And then once you start getting
bigger models, you can get
some that's 7 gigs, you can get some 14
gigs, which I have some on my Ollama
server that are like 14 gigs. So it all
depends on what you need it for and all
that good stuff. All right, y'all. So
that's LM Studio running on Ubuntu
24.04, installed from the AppImage,
loaded with a model, and ready to use
with your local API. It's a clean
interface, it's easy to learn, and it's
a great stepping stone into working with
local AI models, especially for
folks who don't want everything going
through the cloud. And again, LM Studio
can work alongside Ollama if you want to
combine them. But that's something we'll
save for another video. If you found
this helpful, go ahead and hit that like
button, subscribe, drop a
comment telling me what models you're
using or what tools you want me to cover
next. Thanks for watching and as
always, keep learning, keep building,
and of course, keep it techie. Yo, what's
up y'all? Listen, if you've been sitting
there thinking about making a move, let
me tell you, tech is where it's at. I
don't care where you coming from.
[music] Whether you've got a degree, a
GED, or just pure hustle, there's room
for you in this game. You see, [music]
tech is more than just keyboards and
code. It's solving problems, creating
opportunities, and building the future.
You already have what it takes because
tech doesn't care where you start. It
cares where you're willing to go. You
could teach yourself Linux, learn
Python, break into cyber security, or
even launch your own app. And the
resources are out here for free. And
yes, you heard me, free. Now, [music]
yeah, it's going to take effort. You'll
have to grind, but think about this. The
time is going to pass anyway. So, why
not invest it in a skill that'll change
your life? I mean, tech doesn't just pay
the bills. It opens doors to freedom,
[music] stability, and generational
wealth. So, stop doubting yourself,
store small, stay consistent, and keep
building. Because this isn't just a
career, it's a movement. [music] And
guess what? You belong here. So, let's
get it because the future is yours to
build. Keep it tight.