✅ Learn how to build robust and scalable software architecture: arjan.codes/checklist.
@mwillia4 · 1 day ago
We use gRPC for intra-cluster performance and 'near real time replication'. There are a few videos of sending 10,000 messages in about 2 seconds with gRPC. For our testing we did cloud-to-cloud 'drag racing' of the same message over gRPC and over REST, using Python, and the hands-down winner was gRPC. Another reason for our project is that we are also building an SDK supporting multiple languages, and we wanted that 'hard contract' of the proto. It was a learning curve for sure. But now we're talking about good enough and the long-term growth of the app; because we have more tools, it drives us to better understand the business needs. Many times, the business 'right' does NOT mean the 'best' technology. gRPC, REST, GraphQL... are tools, not hills to die on :-) Great overview, and I look forward to every video!
@makempe · 2 days ago
You can get the type annotations by generating the .pyi from the proto as well.
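A minimal sketch (not from the video) of what generating those stubs can look like with grpcio-tools; the proto file name "greeter.proto" is made up, and the --pyi_out flag needs a reasonably recent protoc/protobuf bundled with grpcio-tools:

```python
# Hypothetical example: generate message classes, service stubs, and .pyi
# type annotation stubs from greeter.proto using grpcio-tools.
from grpc_tools import protoc

protoc.main([
    "grpc_tools.protoc",
    "-I.",                  # where to look for .proto files
    "--python_out=.",       # greeter_pb2.py (messages)
    "--grpc_python_out=.",  # greeter_pb2_grpc.py (service stubs)
    "--pyi_out=.",          # greeter_pb2.pyi (type annotations)
    "greeter.proto",
])
```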
@adebayoadenekan8106 · 1 day ago
I use grpclib for my gRPC projects with Python, and it helps generate the .pyi file with the type annotations. For JavaScript/TypeScript I use either nice-grpc & ts-proto or malijs for gRPC. I hardly use the main grpc libraries.
@TheJPhutchins · 1 day ago
I generate the code in the Python build (to avoid checking in the generated code) and have the build system require mypy-protobuf. Then, in the build script - e.g. "hatch_build.py" - make sure to pass protoc the argument "--mypy_out=." to generate the typed .pyi files. I'm not offering a vote of confidence - I've found the entire protobuf ecosystem to be crusty, and likely overkill unless you really know what you're optimizing for.
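A rough sketch of such a build hook, with hypothetical file and package names; it assumes grpcio-tools and mypy-protobuf are declared as build requirements so that protoc-gen-mypy is available:

```python
# hatch_build.py (hypothetical): compile the protos during the build so the
# generated _pb2 modules and typed .pyi stubs never need to be checked in.
# (The hook is registered via [tool.hatch.build.hooks.custom] in pyproject.toml.)
from grpc_tools import protoc
from hatchling.builders.hooks.plugin.interface import BuildHookInterface


class ProtoBuildHook(BuildHookInterface):
    def initialize(self, version, build_data):
        protoc.main([
            "grpc_tools.protoc",
            "-Iprotos",
            "--python_out=src/mypkg",
            "--grpc_python_out=src/mypkg",
            "--mypy_out=src/mypkg",  # typed .pyi output via mypy-protobuf
            "protos/service.proto",
        ])
```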
@agb2557 · 2 days ago
So glad you've done a video using Go! I'm a Python dev, but I've been using Go in my spare time and I'm really enjoying it.
@mwillia4 · 1 day ago
Amen… Go is our second language… and every time I'm in it… I love it!!! I don't find myself 'defaulting' to it yet, though.
@raulmatiasgallardo · 2 days ago
For the question of where to store the proto files, one solution is to create a client in the same repo and expose it as a library, so users of your service can easily use it. It requires more work, but it makes everyone's life a lot easier.
@yifeiren8004 · 2 days ago
@raulmatiasgallardo It is actually worse to use, because with REST you know the full schema, but with gRPC that's not always the case. gRPC is in fact better when you provide both the client and the server.
@vlplbl85 · 2 days ago
I used gRPC to send pictures from a client app to an image recognition model hosted on another machine, and it was 10 times faster than REST for big files like pictures.
@yifeiren8004 · 2 days ago
Then you must have done it wrong. gRPC doesn't offer a meaningful performance improvement; it only gives you a better abstraction when programming.
@bigmacbeta · 2 days ago
I was under the impression gRPC uses protobuf, which uses binary serialization, delivering smaller payload sizes than, say, JSON, and hence performance and efficiency gains.
@yifeiren8004 · 2 days ago
@bigmacbeta You can use that in REST too.
@yifeiren8004 · 2 days ago
@bigmacbeta And even then, it is not that much more efficient. Basically the difference is that protobuf doesn't include the field names, and that's it...
@ShadowManceri · 2 days ago
JSON serialization is going to be your biggest problem, and likely the cause of any slowdowns you might encounter. JSON is not very good for binary data. There is no binary or blob data type in JSON, only strings. Strings in JSON have to be UTF-8, so no raw binary data. Not only do you need to encode the binary data to make it valid UTF-8, you also have to encode that again into JSON. BSON would actually support byte arrays and would be more suitable for such a use case. This has really nothing to do with REST. Also, REST does not state that your payload must be JSON. You could in fact just serve the image, a tar or zip, etc. Or even use protobuf with REST, just like gRPC does. But I'm not sure that's a great idea given the use case of REST; it kind of defeats the point.
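A small illustration of that encoding overhead (my own example, not from the video): raw image bytes have to be base64-encoded to become valid UTF-8 before they can live inside a JSON string, which inflates the payload by roughly a third.

```python
import base64
import json
import os

image_bytes = os.urandom(1_000_000)  # stand-in for a ~1 MB image

# Wrap the bytes in JSON: base64 first, then JSON-encode the resulting string.
payload = json.dumps({"image": base64.b64encode(image_bytes).decode("ascii")})

print(len(image_bytes))  # 1,000,000 raw bytes
print(len(payload))      # roughly 1,330,000 characters after base64 + JSON
```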
@sainsay · 2 days ago
We use gRPC in our multi-language deployment, and when we switched to it we immediately knew it was the right choice. The continuous connection you set up is amazing and super easy to use.
@ihordrahushchak5439 · 2 days ago
I found a cool solution for having a common application interface between two applications using REST. I did it with FastAPI and a Next.js application. I wrote a CLI using Typer; the command takes the FastAPI application, generates a Swagger (OpenAPI) file, injects it into the Next.js project, and then uses the openapi-generator-cli package to generate an SDK for the Next.js application. And there you go: you just define endpoints with FastAPI models, and you get an SDK with functions to access the endpoints and interfaces describing how to communicate.
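A hedged sketch of that workflow (module paths and file names are hypothetical): a small Typer command exports the FastAPI app's OpenAPI schema to a JSON file, which openapi-generator-cli can then turn into a frontend SDK.

```python
import json
from pathlib import Path

import typer

from myapp.main import app as fastapi_app  # assumed location of the FastAPI app

cli = typer.Typer()


@cli.command()
def export_openapi(out: Path = Path("frontend/openapi.json")) -> None:
    """Dump the OpenAPI schema so openapi-generator-cli can build an SDK."""
    out.write_text(json.dumps(fastapi_app.openapi(), indent=2))
    typer.echo(f"Wrote OpenAPI schema to {out}")


if __name__ == "__main__":
    cli()
```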
@diegol_116 · 2 days ago
Happy to see Go code in a video by Arjan :)
@be1tube · 1 day ago
NCBI has been successfully using gRPC in its backends for several years. There are a number of high-throughput, low-latency use cases (the Datasets product comes to mind, but I've seen it in others), and the compiled contract helps keep multiple teams in sync and makes API transitions explicit. But it's not a silver bullet. The APIs NCBI exposes to the outside world are usually "REST."
@madskaddie · 2 days ago
(RESTafarian enters the room) gRPC is not a REST replacement, but maybe an HTTP+JSON replacement. To implement REST, it would need quite a few things. Some I consider important:
- URLs or something similar: you don't have a standard way to locate a method call.
- Semantics: there are no standard semantics for retrieval or idempotent state changes. In gRPC, everything is the equivalent of a POST, so middleware is mostly impossible (no URL also means no caching middleware).
- The IDL is the thing that could be used to deliver form data, but only the static shape, not predefined values and the like.
@yickysan · 2 days ago
Learning about gRPC for the first time today.
@michiscifi · 20 hours ago
gRPC sounds very promising for efficient communication between services, especially with features like streaming and strong typing. Have you considered how OTP's native mechanisms, such as GenServer, could be leveraged for similar goals? In certain cases, it might offer even tighter integration and fault-tolerance within the BEAM ecosystem. I'd love to hear your thoughts on where these approaches might complement or diverge.
@varshard0 · 2 days ago
One of our teams had an issue because of gRPC's sticky connections. The team didn't have a good way of load balancing gRPC connections from the client side and kept using the same connection to send multiple unary calls to the same service. So they kept spamming requests to the same service instance, even though the service had spawned a new instance to help handle the extra load. They unnecessarily overloaded the service and degraded their own service as well.
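One possible mitigation, sketched under the assumption that the backends are resolvable via DNS (the target below is made up): tell the channel to resolve all addresses and round-robin across them instead of pinning every unary call to one sticky connection.

```python
import json

import grpc

# Ask the client-side load balancer to use round_robin across resolved backends.
service_config = json.dumps({"loadBalancingConfig": [{"round_robin": {}}]})

channel = grpc.insecure_channel(
    "dns:///my-service.default.svc.cluster.local:50051",  # hypothetical target
    options=[("grpc.service_config", service_config)],
)
```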
@Rcls01 · 2 days ago
Not a monolith. A monorepo. That's the way Google wants to maintain their stuff. You see a lot of things in gRPC that reflect the Google style of doing things. I also had to search for a package that compiles proto to TypeScript and isn't the horrible CommonJS mess that the default protoc tool outputs. gRPC is really not made for browser apps.
@adebayoadenekan8106 · 1 day ago
I use nice-grpc & ts-proto for my TypeScript project. ts-proto will generate the necessary TypeScript code from your .proto files and you can specify the js version you want too.
@adrianabreu1565 · 2 days ago
Would love to see the in-depth library videos, but with Go ones :)
@АндрейМихайлов-о6я3ц · 1 day ago
How did you know that I'm searching for a way to write interfaces right now? :)
@stainomatic · 2 days ago
You can use both depending on your needs. It's a balance between development complexity and speed.
@dominicbue1404 · 7 hours ago
So with OpenAPI (formerly Swagger) you can define a contract for a REST interface with a YAML file and also generate client or server code. There are also clients that can render those YAML files in a very readable way, including clickable demos.
@SVGc1993 · 11 hours ago
14:04 - Git Submodules!
@MrGiovajo · 2 days ago
10:08 What do you mean? Almost 100% of all browsers and all web servers support HTTP/2.
@galuszkak · 2 days ago
@MrGiovajo Gunicorn doesn't support it, Uvicorn doesn't support it, etc. There are a lot of web servers that don't support HTTP/2.
@florentcastelli · 1 day ago
While browsers support HTTP/2, there's no way for them to make custom HTTP/2 requests with trailing headers as required by gRPC, even though these are allowed by the standard. So instead, people use a special proxy that accepts requests in a different way and then forwards them as proper requests to your gRPC server. In general, gRPC goes beyond HTTP/2, and I've seen some people using it over many types of transport.
@MrGiovajo · 1 day ago
@florentcastelli Thanks for the clarification!
@MrEmbrance · 2 days ago
Another on-point video. Thanks!
@ArjanCodes · 2 days ago
You’re welcome!
@AmirHosseinHonardust · 2 days ago
We used gRPC for our internal Go microservices. Then we moved to REST, and our memory allocations were cut in half. Why? Because the structs created by gRPC did not fit our needs, so we had to remap and reallocate every time we passed a struct around. Also, we did not need streaming. I highly recommend you avoid gRPC, or, if you think you are an exception, run real-world benchmarks to prove that you are actually seeing any performance improvement.
@MrLotrus · 2 days ago
Maybe you used pointers more than was needed?
@AmirHosseinHonardust · 2 days ago
@MrLotrus Well, if you use gRPC, you don't have much choice, since it adds pointers to every single thing. It contains mutexes as well, and I'm not sure why. But there are other things. For example, the generated structs usually don't compile to very idiomatic Go; their names are clearly C++-inspired. You also have types that don't have a clear equivalent in gRPC, for example UUIDs. So if you use a UUID all over your application, you either have to define a message in the proto just for the UUID, or you have to parse strings into UUIDs to make sure the string that was passed in is a valid UUID. The whole performance point of gRPC is that it serializes and deserializes better than JSON. But the thing is, you only reap that benefit if the struct you are serializing/deserializing is exactly the struct generated by the protobuf compiler. When it isn't, you have to convert and map every field, which not only takes time, it also adds a lot of branching and allocations. Doing HTTP+JSON with something like Sonic, which reaps the benefits of SIMD, and Fiber, which tries its best to avoid allocations, does wonders. And it doesn't force us to write mappers all the time, which means we have time to better optimize our algorithms as well, which is a magnitude more effective than what the non-self-describing binary format that loves putting pointers and mutexes everywhere can offer. Although, to your point, we did test the simplest case of gRPC vs. Fiber+Sonic. It was not as horrible as the real-world case: just 10% fewer allocations, but still around 200% slower anyway.
@AmirHosseinHonardust · 2 days ago
I have to mention that I'm not sure if it was 200% or 400%. And the compiler version sometimes matters, although newer versions don't always mean "better".
@eboyd53 · 2 days ago
There are applications that use both gRPC and REST; it all depends on the needs of the application.
@nicksmith4507 · 2 days ago
I am old enough to remember when SOAP was the shiny new thing.
@jesulobajohn8468 · 1 day ago
You can also use buf; it's way easier.
@MrLotrus · 2 days ago
The gRPC tooling lets you generate .pyi files from the proto.
@adebayoadenekan8106 · 1 day ago
That's what I use too.
@Kaelovision · 2 days ago
A Go course please (advanced topics only).
@asksearchknock · 2 days ago
I had the recent misfortune of having to use a gRPC application, and whilst it does have a great deal of precision, it was really awful to use. I would say that gRPC ensures absolute compatibility between two different systems and is somewhat language agnostic, but for simple tasks it is way overkill. For most things, I'd just like to send a simple message and get a simple reply, and not have to generate an entire communication schema. But when you're dealing with massive, massive systems that are completely interdependent, it can be useful to know precisely what you're getting.
@tejeshkaliki · 2 days ago
My favorite answer for what the g stands for in gRPC is the recursive explanation, where the g stands for gRPC 😂 gRPC: gRPC Remote Procedure Calls
@chudchadanstud · 2 days ago
It stands for Google. We all know it. It's why Go is called Go. It's not a well-hidden secret.
@marcysalty · 1 day ago
Has anyone tried it with Rust?
@UNgineering · 2 days ago
14:10 You can keep all the proto files in one repo and include it as a git submodule in each of your service repos, but my preference is the monorepo approach.
@johanmartijn · 2 days ago
In Python, gRPC is terrible, and I would prefer REST.
@bigmacbeta · 2 days ago
Why is gRPC terrible in Python?
@prashanthb6521 · 2 days ago
Please elaborate.
@johanmartijn · 2 days ago
@bigmacbeta With REST you can make simple requests, while gRPC uses elaborate generated code.
@adebayoadenekan8106 · 1 day ago
Actually, gRPC allows two ways of using .proto files: dynamically or statically. Dynamically, no code is generated; statically, the code is generated for you. I use a library called grpclib when working with gRPC in Python instead of the official grpc package. I combine gRPC and REST: REST for the API gateway, and gRPC for service-to-service communication.
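For reference, a small sketch of the "dynamic" route with the official grpcio runtime loader (it still needs grpcio-tools installed); the proto file, service, and message names here are made up, not from the video:

```python
import grpc

# Compile and load greeter.proto at runtime instead of shipping generated code.
protos, services = grpc.protos_and_services("greeter.proto")

channel = grpc.insecure_channel("localhost:50051")
stub = services.GreeterStub(channel)
reply = stub.SayHello(protos.HelloRequest(name="Arjan"))
print(reply)
```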
@alexandrodisla6285 · 2 days ago
Only available in Google Chrome.
@drendelous · 2 days ago
Why do you all use thumbnails with you pointing your finger...
@masked00000 · 3 hours ago
Feedback: your videos are getting too long. Great content, but there's too much talking. Keep it up!