In "Building Microservices with Node, Docker and Nginx pt 3 - Connecting the Microservices" I show you how to use Docker Compose and Nginx to connect your fleet of microservices. Here is the code: github.com/fCh...
Comments: 90
@UrGuru · 4 years ago
I watched all three videos in the series and it really cemented my understanding of Docker, microservices and Nginx, thank you!
@colinroemer5087 · 5 years ago
This series was fantastic. Thank you again. Would love another video expanding on this. Maybe deploying microservices and handling scalability? Possibly adding in PM2 or dealing with load balancing? Your videos have really helped me understand certain concepts more clearly. Keep them coming!!!
@FredrikChristenson · 5 years ago
Glad to hear it m8. I have scheduled a video showing the basic idea behind how to split a monolith into microservices and how to perform a migration that builds on this series. I am working on a Java series right now but after that I am putting together a video on how to handle service discovery. We will cover as much as possible in due time, but it's good to know that microservices is not a very common architecture in the real world. Have a great day and thank you so much for watching!
@neilparsonage269 · 4 years ago
By far the best introduction to microservices I've found - Awesome ! Thanks for taking the time !
@Treegrower · 5 years ago
9:50 this part of the video blew my mind! I definitely will be using Nginx in my next projects. Very informative video :)
@mariascharin1978 · 3 years ago
Super helpful! I love your enthusiasm! Thank you!
@RaghavaIndra · 4 years ago
Wow! Thanks a lot for all the hard work you did to explain the concepts to us. I really appreciate it.
@republic2033 · 3 years ago
Thank you Fredrik, that was a very informative series. I really liked that you talked about your experience and the tricky sides of microservices. Cheers!
@rahulek914 · 4 years ago
Excellent 3 part series. Thanks a lot.
@MrAmarender2 · 3 years ago
That's a great series. You explained it clearly.
@Raiyan678 · 6 years ago
Wow. This helped me a lot. I have to make something like a Twitter clone for a cloud computing class and I stumbled across yours while searching for tutorials. Amazing. Keep up the good work!
@FredrikChristenson · 6 years ago
Hi Raiyan! Thank you, I am very happy the video helped you out! Have a great day and thank you so much for watching!
@inigoreiriz1299 · 5 years ago
Great content Fredrik, true inspiration for many programmers!
@FredrikChristenson · 5 years ago
Glad to hear that my hobby is useful m8. Have a great day and thank you so much for watching!
@douglashenri5017 · 6 years ago
Dude, thanks for putting this up, you really enlightened me.
@FredrikChristenson · 6 years ago
Hi Douglas! Glad to hear it m8, I hope it cleared some things up about Microservices, it is a tricky subject. Have a great day and thank you so much for watching!
@phongkien · 4 years ago
Thanks for your detailed explanation.
@lh999 · 3 years ago
Great explanation!
@kriskrawiec5513 · a year ago
great series, thank you :)
@sarjibkarki1258 · 4 years ago
Great tutorial. Thanks a lot.
@shensean1784 · 3 years ago
It is still great content today.
@daanwijns6568 · 4 years ago
Hands down, the best explanation. Amazing tutorial! Thx a lot!
@azizutkuozdemir · 5 years ago
It is a pretty good demonstration, thanks for sharing it with us.
@lehoangnam1400 · 4 years ago
Awesome videos. Thank you so much!
@JoshRenton91 · 6 years ago
These videos are amazing, thank you so much! I'm building a small system that I want to split into separate pieces to ensure close to real time performance. I was considering multiple microservices running Node, but your warnings have sunk in. Would you recommend something like child processes instead?
@FredrikChristenson · 6 years ago
Hi Josh! Realtime performance is something you will get using technologies like websockets and, say, Redis for really fast transfers. Child processes will not be relevant to you unless you are trying to max out the processing power for your application, and that should only be needed when you have either tons of users or very heavy computations to deal with. That is what we call a scaling issue, but it has nothing to do with keeping things in realtime. My guess is that you don't have that problem in the beginning, so making a standard monolith with realtime technologies will most likely be good enough. Have a great day and thank you so much for watching!
@JoshRenton91 · 6 years ago
@@FredrikChristenson Thank you for your reply Fredrik. I think I wasn't quite clear enough in my question. Yes, indeed we will be using websockets, and most likely Redis as a datastore (but implementation details are still being worked out). I will be building an algorithm (let's call it Task A) that checks realtime data coming in via WS at high frequency. It may then fire off REST request(s) / do other things in reaction to what it has observed in that data. Each instance of Task A may end up running multiple times per second. Theoretically, unlimited simultaneous instances of Task A may be deployed with varying parameters depending on user requirements (of course not unlimited in reality, but there is no obvious limit to the number a user might desire). Simultaneously the system will be running other tasks like: - Recording the history for this - Maintaining some kind of state in case of crash - Running Express (of course) - "Task B": fetching a range of data via REST APIs and manipulating it before shipping it off elsewhere - Ensuring our UI remains silky smooth and that instructions sent to the system from the UI are received very promptly. It is mission critical that Task A reacts to incoming data / user input close to real time. Despite Node's asynchronous nature I am still concerned that trying to run so many tasks on a single thread will produce latency that might not be obvious at first but, as we add more task types (Tasks C, D, E...) and introduce more functionality, will cross the threshold into "too slow". As such I decided it would be beneficial to split the program across a number of Node processes, each handling its own task - which sounds a lot like microservices - e.g. a WebsocketListener that just listens to WS (we'll be listening to a number of channels from a number of APIs) which can be queried for data by a TaskExecutor that runs in a separate process etc.
So I imagined that, without microservices, this could be one beefy machine running a fleet of Node processes, controlled by a single master node, that talk to each other when they need to. Does this make sense or do you feel I've missed something glaringly obvious? And I'm sorry for the essay. Josh
@FredrikChristenson · 6 years ago
Then I would say that you are on the right track m8, but do yourself the favour of benchmarking with a single process first. If you are running network calls and low effort computations things should be ok, but if you find that you are indeed in need of maxing out your cores I would start by looking at this: pm2.keymetrics.io/docs/usage/cluster-mode pm2 has been a great help to me in the past, and if it turns out that you need to scale even further I would keep your service as a monolith and then scale that instance behind a load balancer where each container runs with pm2. I would only use microservices as a last resort if you have some really heavy processes, and even then I would start by moving that code into a service and running that in isolation before I split the entire monolith. Have a great day and thank you so much for watching!
@JoshRenton91 · 6 years ago
@@FredrikChristenson Thank you, your insight is immensely helpful.
@andreigatej6704 · 3 years ago
@@JoshRenton91 Hi, what approach did you end up using? It seems to be a very interesting problem. Thanks!
@malikbrahimi7504 · 3 years ago
Awesome series! What happens when you start scaling your orchestration, for example when each microservice has its own db and replicas? Is HTTP over the network the best way to communicate, or message brokers?
@rizkiheryandi5759 · 6 years ago
I think it would be better if you gave a link to the example code, maybe on GitHub. It would help a lot of the newbies watching your great tutorial video, and get you more GitHub followers 😆 But it's just my opinion, everything is up to you, you are the project and video owner 😆😀
@FredrikChristenson · 6 years ago
I have updated the video description with a link now m8, thank you for the suggestion. Have a great day and thank you so much for watching!
@supersoi1 · 4 years ago
Great series !!! +1 kudos
@adistutorials5399 · 6 years ago
Awesome video Fredrik!
@FredrikChristenson · 6 years ago
Glad you enjoyed it m8! Have a great day and thank you so much for watching!
@LovepreetSingh-ez5cq · 6 years ago
Thanks for the great videos, really liked these. Could you please share some useful reading material you think would be good for microservices with Node.js?
@FredrikChristenson · 6 years ago
Glad to hear that you enjoyed them m8! I am sorry to say that I can't share reading materials on this topic, as the knowledge I have about API design and microservices comes from working with them professionally and a lot of research into different blogs and tech talks from various companies. I am however working on another series following up on this one where we will kick it up a notch and build an architecture that handles service discovery by using a message broker that connects the services. Have a great day and thank you so much for watching!
@jasonma1904 · 5 years ago
Awesome tutorial, thanks for everything you've done. I like your videos and they keep me learning. For this video series I only have one question: what is the sequence for running all of these? Running all the Docker containers first and then the Nginx server? You explain everything but not how to finally run it all. Cheers.
@jasonma1904 · 5 years ago
And another question: is "docker-compose up" the command to run all the containers?
@FredrikChristenson · 5 years ago
Yes, in the compose file we declare the dependencies between the containers so Docker knows that nginx needs to start first and then start the other services. This isn't super important since there is no hard connection between nginx and the other containers, but it would be really important if, say, we had a service that needed another service the second it started. Have a great day and thank you so much for watching!
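A minimal sketch of the mechanism being described (service names, images, ports and the direction of the dependency are illustrative assumptions, not necessarily the video's exact files):

```yaml
services:
  books:
    build: ./books        # hypothetical service directory
  nginx:
    image: nginx
    ports:
      - "8080:80"
    depends_on:
      - books             # ask compose to start books before nginx
```

Note that `depends_on` only orders container startup; it does not wait for the server inside the container to actually be ready to accept connections.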
@shivampawar5057 · 5 years ago
Superb tutorial! Is this API ready for production or is any further configuration still needed?
@usama5975 · 6 years ago
Amazing video! I just have one question regarding nginx conf. You have exposed the ports for search, books and videos as 3001, 3002 and 3003 respectively. But in nginx.conf you set the proxy_pass for all services on port 3000. Isn't 3000 the internal port running node on each docker service?
@FredrikChristenson · 6 years ago
Glad to hear you like the video m8! Correct m8, but since they are all running on their own Docker network they can connect to each other. The 3001 - 3003 ports are the ports I expose to my own laptop so I could show the difference between connecting to the container using nginx and just calling the container directly. Have a great day and thank you so much for watching!
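In compose terms, the distinction above is the two sides of a port mapping. A sketch (names are assumptions):

```yaml
services:
  books:
    build: ./books
    ports:
      - "3001:3000"   # host:container - 3001 on the laptop, 3000 inside
```

nginx, sitting on the same Docker network, talks to the container port directly (books:3000); 3001 exists only so the host can bypass nginx for debugging.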
@usama5975 · 6 years ago
Oh great, got it. Thanks :)
@theoneandonly6316 · 6 years ago
Really well put together and very nicely explained... helped me a lot. A question: I have a React frontend (created with create-react-app) and a Node.js backend for the db and API. That's it. Do you think for that kind of requirement I should use a microservice architecture, or is it just overkill? I find this structure really useful, especially as it scales.
@FredrikChristenson · 6 years ago
Hi m8! Thank you, I am glad you liked the video! It is major overkill m8. The best way to scale an application is to think about it in steps. Just like any project it should start small and simple. If you are building a road for a small town you start by building a dirt road that fits that town. If the town grows to a city you build a highway but if you build a highway for a small town it will be too much too early and the cost of the road will be much higher than the value of the town. Start by building a simple monolithic application and if the project you have grows really big you start moving to a more advanced architecture. Have a great day and thank you so much for watching!
@dimitargetsov9690 · 4 years ago
PURE GOLD!!!!!
@TechByRyan · 6 years ago
Great series man! So is nginx just acting as an API gateway that routes requests to the correct service? Is this better than having an API microservice that makes requests to the correct service and returns results?
@FredrikChristenson · 6 years ago
Hi m8! You understood the architecture exactly m8. This version is fairly simple, but something we will touch on in a coming video is how to manage service discovery more gracefully. It is a possible strategy to find services by adding domain names directly into the service, and it will work for small systems, but it brings other issues when you want cross-cutting functionality like, say, validation, or creating isolated networks of services. Other issues may be caused by scaling: if you have multiple instances of a service you will have to have some form of load balancer to distribute the calls to the service, and that gets really messy if you need to handle that in the calling service. Have a great day and thank you so much for watching!
@rja421 · 6 years ago
Finding this was like striking gold. I already have a site in docker containers with an nginx reverse proxy but the site I am just finishing has an api and I didn't want to serve the static files from express.
@FredrikChristenson · 6 years ago
Hi m8! Thank you, I am glad you found the video useful! Have a great day and thank you so much for watching!
@bazumbaz · 5 years ago
Hi Fredrik, great video series! When running nginx, wouldn't it be good to disallow any other connections to the actual services on their own ports? If so, how do you do that?
@FredrikChristenson · 5 years ago
These services are running on my local machine, so if I wanted to "isolate" them I would simply run them without exposing their ports to my machine. That way they will only run on the internal network Docker creates for them, and as long as the only exposed port is nginx's there is no other way to connect to them. In theory it is possible to compromise one service instance and use that to connect to the rest of the Docker network, but that is a much more advanced topic and likely not something you will have to account for, as securing a network at scale is well into senior level programming skills. Have a great day and thank you so much for watching!
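That isolation can be sketched in compose like this (illustrative names): only nginx publishes a host port, so the services are reachable solely over the internal network.

```yaml
services:
  books:
    build: ./books
    expose:
      - "3000"      # visible to other containers on the network, not the host
  nginx:
    image: nginx
    ports:
      - "8080:80"   # the only port published to the host machine
```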
@egor3725 · 6 years ago
Thanks for the video. However, you made a mistake in the client code: your fetch call should not be surrounded by try-catch; you have to either handle your errors in fetch's chain, or create another async function and explicitly call it with the await operator inside a try-catch, hope you got what I mean. ** line 49 (the value should be explicitly returned, in your case it's undefined :) ) and line 64 **
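The pattern the commenter describes can be sketched like this. It is a minimal illustration with a stand-in for fetch so it runs without a network; all names are made up, not the video's actual client code:

```javascript
// Stand-in for a fetch-style call that may reject; lets the sketch run
// without a network.
const fakeFetch = (shouldFail) =>
  shouldFail
    ? Promise.reject(new Error("network error"))
    : Promise.resolve({ json: () => Promise.resolve({ books: ["Dune"] }) });

// Put the awaits in their own async function and explicitly return the
// result; without the return, the caller would get undefined.
async function loadBooks(shouldFail) {
  const res = await fakeFetch(shouldFail);
  return res.json();
}

// Handle errors at the call site, around the awaited call, instead of
// wrapping a bare fetch chain in try/catch.
async function main() {
  try {
    const data = await loadBooks(false);
    console.log(data.books[0]); // prints "Dune"
  } catch (err) {
    console.error("request failed:", err.message);
  }
}

main();
```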
@scalarious286 · 5 years ago
Hey, this is very helpful. Can you do a video on having microservices on different servers?
@FredrikChristenson · 5 years ago
I have a few videos planned where we will walk through deployment to different servers and what impact different approaches have on our application and our daily workflow. Have a great day and thank you so much for watching!
@matthewanderson7236 · 6 years ago
It looks like you're making ajax calls in the web service in public/index.html but you also seem to have app.js and server.js files in the web service as well. What purpose does server.js and app.js serve in the web service? What do your request handlers look like in app.js? Great series btw! Any way you can make your code available on GitHub?
@FredrikChristenson · 6 years ago
Hi Matt! Great question, I think this video will be useful to explain why I use an app.js file and a server.js file: kzbin.info/www/bejne/gJe5n5xvrLqrsNE I can share the code on Github, I will get on that as soon as I can, hope you have a great day!
@FredrikChristenson · 6 years ago
You can find the code here m8: github.com/fChristenson/microservices-example
@matthewanderson7236 · 6 years ago
Thanks, man!
@Daorcs · 5 years ago
Explanation-wise: good. As a getting-started guide for microservices: not so good. I feel some key points are mentioned but some things are missing. I tried to do what you've done (a similar config) for a school project but to no avail. Wasn't expecting much out of youtube videos anyway, thanks.
@hugojose12 · 4 years ago
Fucking amazing.
@tian_wijaya · 5 years ago
Hey +Fredrik Christenson, thanks a lot for your explanation videos about microservices, they gave me a lot of information about the architecture. BTW, would you mind creating videos about the deployment process for this microservice you've created, all the way to production ready? Actually I'm still confused about how to deploy an application along with the Docker things inside it. Hope you consider this!
@FredrikChristenson · 5 years ago
No problem m8, there are a few ways you can do this depending on what host you are using. Sure m8, I can put together a few videos on deployment. I am thinking we will start by going through the different levels of deployment commonly used and end with a video on how to do it with microservices, as that is probably one of the more complicated deployments to understand. Have a great day and thank you so much for watching!
@tian_wijaya · 5 years ago
@@FredrikChristenson Great! Thank you for your nice response. I'm waiting for your next series on deployment. I've been researching deploying microservices using OpenShift. It looks so simple, but I still need other options & more explanation.
@manuellopez1234 · 4 years ago
Why don't you need to use the allow header in nginx?
@vinaykornapalli7977 · 4 years ago
You are god. Thank you
@Arm924 · 6 years ago
Do you use NATS to communicate between microservices?
@FredrikChristenson · 6 years ago
In this video the services are on the same network, so they are just talking over HTTP with nginx acting as the gateway, but I have prepped a video where I will show how to do this with a message queue instead, which is a bit more scalable than this. Have a great day and thank you so much for watching!
@amitk0277 · 5 years ago
@@FredrikChristenson Where is that video? Link?
@lahiruudayanga5989 · 4 years ago
@@FredrikChristenson Amazing video bro! Did you upload the video about the message queue?
@lahiruudayanga5989 · 4 years ago
@@FredrikChristenson I found the link on GitHub but it says the video is private.
@kahyalar · 6 years ago
Thank you for this informative video series. Quick question: Don't we need a volume for MongoDB too?
@FredrikChristenson · 6 years ago
No worries m8! If you want to persist the data beyond the lifetime of the container you should use a volume, but since I am doing local development with some random test data I didn't feel I needed to persist it longer than the container. Have a great day and thank you so much for watching!
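If you did want the data to survive container restarts, a named volume is enough. A sketch (the service and volume names are assumptions):

```yaml
services:
  mongo:
    image: mongo
    volumes:
      - mongo-data:/data/db   # /data/db is where mongod stores its files

volumes:
  mongo-data:                 # named volume managed by Docker
```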
@iRedee · 5 years ago
Can someone explain why, as Fredrik states, "Node.js is shitty at I/O"? From what I understand Node.js is an "I/O bound" language, which would mean Node.js, because it's non-blocking, is preferable for I/O since it scales very well for I/O tasks, as opposed to other languages.
@FredrikChristenson · 5 years ago
That is correct m8, for async work Node performs very well, but if you are going to do something heavy and synchronous you are likely going to cause issues for your application. There is currently an experimental attempt to solve this problem and I hope that soon it will be a non-issue. Have a great day and thank you so much for watching!
@iRedee · 5 years ago
@@FredrikChristenson Aha, totally get your point now. Thanks for the update and videos super helpful. 👌🏾
@jwbonnett · 6 years ago
I know this example is just to teach, but I'd like to ask: how come you only have one database instance? I thought you'd normally have an instance per service for failover via e.g. database replication.
@FredrikChristenson · 6 years ago
Good question, that is in fact only true if you need such an approach. For a small scale system running in the cloud I don't see much value in having one database per application instance if you don't have an issue with load, but yes, the architecture most people associate with microservices has one service to one database. There are also situations where you want one instance for all your services; Neo4j is a good example where the value you get comes from having a big graph of your domain. These are of course just my thoughts. Have a great day and thank you so much for watching!
@jwbonnett · 6 years ago
Thank you, I was thinking more about availability as I believe that microservices are about high availability / redundancy e.g. if a service goes down the application still works, although in this case having one database could mean the application could become unusable if the database goes down. But they are my thoughts too. I really liked the pace of the videos, I'd love to see videos covering problems like database transactions over multiple services and service discovery.
@FredrikChristenson · 6 years ago
Oh these are tricky subjects indeed, let me have a think about these great suggestions. Have a great day and thank you so much for watching!
@atmospheric_b · 5 years ago
I can't understand: why do you expose ports 3001, 3002 and 3003 in docker-compose?
@FredrikChristenson · 5 years ago
For testing purposes, we would not do this in a prod environment. Say that I call the nginx instance and my request fails: where did the error occur, in nginx or in my service? By exposing the port I can quickly call the service directly and verify where the error is. Have a great day and thank you so much for watching!
@atmospheric_b · 5 years ago
@@FredrikChristenson thanks man! You too
@zy420806143 · 5 years ago
Very good video! I still have one question: how does books:3000 map to the books container in nginx?
@FredrikChristenson · 5 years ago
When we run our containers with docker-compose they are added to their own network so they can connect to each other, and Docker adds the hostname of each container on the network into each container, so when we say "books:3000" that will be resolved into the IP address of the books container. The ":3000" is the port we started our server on and also exposed from the container so the external containers can connect to it. We could have exposed port 80 as well, and then we wouldn't need to state ":3000" or ":80" because port 80 is the default port of a server, but for some reason I decided to simply run the server on port 3000. Have a great day and thank you so much for watching!
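A sketch of the nginx side of this (the route and port are assumptions, not the exact config from the video): Docker's embedded DNS resolves the compose service name to the container's IP.

```nginx
# nginx.conf fragment: proxy /books to the books container on the shared
# docker-compose network; the hostname "books" resolves via Docker's DNS.
server {
    listen 80;

    location /books {
        proxy_pass http://books:3000;  # service name : container port
    }
}
```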
@dimitargetsov9690 · 4 years ago
Could somebody explain to me why there is a "return" in: ? Many thanks beforehand!
@mustafakursun5870 · 4 years ago
There are good answers to this question on StackOverflow which you may find useful: stackoverflow.com/a/5196138/9577714
@poseidoncoder310 · 3 years ago
nvm, microservices look like way more trouble than they are worth