Very soon. Connect with me here: www.linkedin.com/in/sachin-saxena-graphic-designer/
@Dharmak-x6y • 4 days ago
Thank you, sir, for sharing your knowledge.
@codewithkristi • 3 days ago
Thanks, dear!
@kishorev195 • 9 days ago
How can I use my own pictures and get the output?
@codewithkristi • 9 days ago
Use Python code for it.
@kishorev195 • 9 days ago
@@codewithkristi How do I use it? Please explain clearly, sir.
@OptimusKika • 9 days ago
@@codewithkristi Same doubt. How do we run the Python file? Can you make a video, sir, please? I need to show the execution tomorrow. Can we execute the Python code in Colab?
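A minimal Colab sketch for running a trained model on your own picture. Assumptions (all hypothetical, not the exact files from the video): the model was saved as model.h5, expects 224x224 RGB inputs, and produces a single probability.

```python
# Minimal sketch: predict on your own picture in Colab.
# Assumptions (hypothetical): a saved Keras model "model.h5" and a 224x224 input size.
# In Colab you can upload the picture first with:
#   from google.colab import files; files.upload()
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image

model = tf.keras.models.load_model("model.h5")     # path to your trained model

img = image.load_img("my_picture.jpg", target_size=(224, 224))
x = image.img_to_array(img) / 255.0                # scale pixels to [0, 1]
x = np.expand_dims(x, axis=0)                      # add a batch dimension

pred = model.predict(x)
print("Raw prediction:", pred)
```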
@AnandNerre • 9 days ago
Thanks for sharing your knowledge.
@codewithkristi • 9 days ago
You're welcome. Connect with me here: www.linkedin.com/in/sachin-saxena-graphic-designer/
@kanawadenilesh2649 • 11 days ago
How can I get the Super Store data for practice?
@codewithkristi • 11 days ago
Super Store Data link: github.com/sachin365123/CSV-files-for-Data-Science-and-Machine-Learning/blob/main/Super%20Store%20Data.xlsx
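A minimal sketch for loading that workbook with pandas. Assumption: the raw GitHub URL below is the raw form of the blob link shared above; if the URL layout differs, download the .xlsx manually and point read_excel at the local path.

```python
# Minimal sketch: load the Super Store workbook with pandas.
# Assumption: this raw URL mirrors the GitHub blob link above.
import pandas as pd

url = ("https://raw.githubusercontent.com/sachin365123/"
       "CSV-files-for-Data-Science-and-Machine-Learning/main/"
       "Super%20Store%20Data.xlsx")

df = pd.read_excel(url, engine="openpyxl")  # requires the openpyxl package
print(df.shape)
print(df.head())
```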
When you conduct your session, what time will it be? And on which days of the week?
@dybydx2712 • 18 days ago
Hi Sachin, nice content. Are the videos free? Please let me know.
@sachinpriya88 • 21 days ago
65
@sachinpriya88 • 25 days ago
329
@p4rth_val • 1 month ago
Thanks a lot, sir.
@sachinpriya88 • 1 month ago
126
@sachinpriya88 • 1 month ago
109
@sachinpriya88 • 1 month ago
49
@sachinpriya88 • 1 month ago
20
@sachinpriya88 • 1 month ago
72
@sachinpriya88 • 1 month ago
66
@sachinpriya88 • 1 month ago
42
@codewithkristi • 1 month ago
32
@shreypatel_ • 1 month ago
Are you not creating a kidney disease segmentation video using U-Net and other methods? Please create one.
@codewithkristi • 1 month ago
I will create it once I get time; I am busy with a project these days.
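In the meantime, a minimal U-Net-style encoder/decoder sketch in Keras. Assumptions: 128x128 single-channel inputs and a binary mask output; this is a generic sketch, not the author's exact architecture.

```python
# Minimal U-Net-style sketch in Keras (not the author's exact model).
# Assumptions: 128x128 single-channel inputs, binary segmentation mask output.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU, as in the standard U-Net design
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

inputs = layers.Input((128, 128, 1))

# Encoder: convolve, keep the feature map for the skip connection, downsample
c1 = conv_block(inputs, 32)
p1 = layers.MaxPooling2D()(c1)
c2 = conv_block(p1, 64)
p2 = layers.MaxPooling2D()(c2)

# Bottleneck
b = conv_block(p2, 128)

# Decoder: upsample, concatenate the matching encoder features, convolve
u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
c3 = conv_block(layers.concatenate([u2, c2]), 64)
u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
c4 = conv_block(layers.concatenate([u1, c1]), 32)

outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)  # per-pixel probability

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```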
@asha6911 • 2 months ago
Can you answer my question? How can I create automated workflows for my data using an Azure AI approach?
@codewithkristi • 2 months ago
Go for a custom API in Azure AI.
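A minimal sketch of what that automation can look like: loop over your data and post each record to your own Azure AI endpoint. Assumptions (all hypothetical): AZURE_AI_ENDPOINT and AZURE_AI_KEY come from your own resource, my_data.csv has a "text" column, and the request/response JSON shape depends on the service you deploy.

```python
# Minimal sketch: automate a workflow against a custom Azure AI endpoint.
# Assumptions (hypothetical): endpoint/key from your own Azure resource;
# the JSON payload shape depends on the service you deployed.
import os
import requests
import pandas as pd

ENDPOINT = os.environ["AZURE_AI_ENDPOINT"]   # e.g. https://<resource>.cognitiveservices.azure.com/...
KEY = os.environ["AZURE_AI_KEY"]

headers = {
    "Ocp-Apim-Subscription-Key": KEY,   # standard Azure Cognitive Services auth header
    "Content-Type": "application/json",
}

df = pd.read_csv("my_data.csv")          # your own data (hypothetical file name)

results = []
for text in df["text"]:
    resp = requests.post(ENDPOINT, headers=headers, json={"input": text})
    resp.raise_for_status()
    results.append(resp.json())

print(results[:3])
```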
@asha6911 • 2 months ago
Hi, nice content. Thank you.
@Iamharshilhussain • 2 months ago
❤❤
@wilsonafolabi1382 • 2 months ago
Where did you get the data, though? Nice video; subscribed already.
@codewithkristi • 2 months ago
Connect with me here and I will provide the data link: www.linkedin.com/in/sachin-saxena-graphic-designer/
@aryankaushik93 • 2 months ago
Bro, I am not getting exp3.
@aryankaushik93 • 2 months ago
Bro, where did you get the dataset?
@codewithkristi • 2 months ago
Collected from a nearby hospital.
@jannatulferdausjannatulfer-j1t • 3 months ago
Thanks, sir, but I could not find the weights_seg.hdf5 file on GitHub, even though you work and code with it. Could you please share this file?
@codewithkristi • 3 months ago
Sure, connect with me at www.linkedin.com/in/sachin-saxena-graphic-designer/
@nimrashabbir9296 • 4 months ago
Hey, I need the files urgently. Can you provide a link?
@mayankbungla • 4 months ago
These sites are not free.
@redpillintaker5777 • 4 months ago
Hi, I need the missing Google Drive files ASAP for my research project. Your Drive link isn't working.
@codewithkristi • 4 months ago
Reach out to me here: www.linkedin.com/in/sachin-saxena-graphic-designer/
@Tausif1-9-9-6 • 4 months ago
Many, many thanks for this video. I am currently working on this project for my dissertation.
@codewithkristi • 4 months ago
Thanks, dear! Connect with me here: www.linkedin.com/in/sachin-saxena-graphic-designer/
@Tausif1-9-9-6 • 4 months ago
@@codewithkristi Sir, please tell me: can I run this dataset on my Acer Predator laptop with a 6 GB RTX 3060?
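A 6 GB GPU is generally workable for this kind of training if the batch size and image size are kept modest. A quick sketch for checking that the GPU is visible and avoiding it grabbing all memory at once; assumptions: a TensorFlow backend, nothing specific to the video's notebook. If training fails with a ResourceExhaustedError, lower the batch size (e.g. 16 to 8 to 4).

```python
# Quick sketch: confirm the GPU is visible and enable memory growth.
# Assumption: TensorFlow backend; reduce batch_size if you hit
# out-of-memory errors on a 6 GB card.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible:", gpus)

for gpu in gpus:
    # Let TensorFlow grow GPU memory as needed instead of grabbing it all
    tf.config.experimental.set_memory_growth(gpu, True)
```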
@ayseyavas5060 • 5 months ago
Can you share all the Jupyter notebook code at the end that shows the detection?
@codewithkristi • 5 months ago
Sure, will do.
@TestKappa-vz4dz • 5 months ago
Can you share all the Jupyter notebook code at the end that shows the detection?
@akashsalmuthe9846 • 6 months ago
Thanks for a great video! Any reference or documentation on how to train the custom model in DIS using the Python SDK?
@codewithkristi • 6 months ago
Sure, will upload very soon.
@codewithkristi • 6 months ago
Connect here: www.linkedin.com/in/sachin-saxena-graphic-designer/
@Ishi2664 • 6 months ago
What do we have to do after creating it and adding the keys?
@codewithkristi • 6 months ago
You are providing API access to your Azure subscription so the resources can be used.
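A minimal sketch of what the key and endpoint are then used for: most Azure AI client libraries authenticate with exactly these two values. Assumption: the Text Analytics client is used here purely as an example; whichever service the video creates follows the same endpoint-plus-key pattern.

```python
# Minimal sketch: using the endpoint and key from "Keys and Endpoint".
# Assumption: Text Analytics is only an example; other Azure AI clients
# are constructed the same way (endpoint + AzureKeyCredential).
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"  # from the Azure portal
key = "<your-key-1-or-key-2>"                                           # from "Keys and Endpoint"

client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
result = client.detect_language(documents=["Hello, Azure AI!"])
print(result[0].primary_language.name)
```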
@ggraj • 6 months ago
Hi, excellent video! I'm trying to recreate it, but I can't see the weights.hdf5 file on your GitHub. What model architecture did you train to get such high performance on the binary classification? My accuracy is only around 65% when following the exact notebook. :) Edit: my mistake, I think; I ran with too few epochs. Anyone following along, make sure you change epochs=1 in the fit line. :)
@codewithkristi • 6 months ago
Access this folder and you will find it: drive.google.com/drive/folders/16UbSpyMp61epoFmIvxR3O02C0gd3DeJi?usp=sharing
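Regarding the epochs tip above, a minimal sketch of the point: training for only one epoch usually stalls near chance-level accuracy, so raise epochs and watch the validation curve before judging the model. Assumptions: the toy model and random arrays below are placeholders for the notebook's actual model, X_train and y_train.

```python
# Minimal sketch of the epochs point: epochs=1 rarely converges.
# Assumptions: toy model and random data stand in for the notebook's own.
import numpy as np
import tensorflow as tf

X_train = np.random.rand(256, 64).astype("float32")   # placeholder features
y_train = np.random.randint(0, 2, size=(256,))        # placeholder binary labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# epochs=1 (as in the notebook) is too few; try a larger value such as 25
model.fit(X_train, y_train, epochs=25, batch_size=32, validation_split=0.2)
```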
@@codewithkristi Previously when I ran this program the model ran fine, but now it shows an error for the same code:

/content/sign-language--1
Ultralytics YOLOv8.0.230 🚀 Python-3.10.12 torch-2.1.0+cu121 CUDA:0 (Tesla T4, 15102MiB)
engine/trainer: task=detect, mode=train, model=/content/yolov8s.pt, data=/content/sign-language--1/data.yaml, epochs=25, time=None, patience=50, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=train, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, freeze=None, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, vid_stride=1, stream_buffer=False, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, embed=None, show=False, save_frames=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, show_boxes=True, line_width=None, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0, cfg=None, tracker=botsort.yaml, save_dir=runs/detect/train

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/trainer.py", line 116, in __init__
    self.data = check_det_dataset(self.args.data)
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/data/utils.py", line 312, in check_det_dataset
    raise FileNotFoundError(m)
FileNotFoundError: Dataset '/content/sign-language--1/data.yaml' images not found ⚠, missing path '/content/sign-language--1/sign-language--1/valid/images'
Note dataset download directory is '/content/datasets'. You can update this in '/root/.config/Ultralytics/settings.yaml'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/cfg/__init__.py", line 448, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/model.py", line 351, in train
    self.trainer = (trainer or self._smart_load('trainer'))(overrides=args, _callbacks=self.callbacks)
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/trainer.py", line 120, in __init__
    raise RuntimeError(emojis(f"Dataset '{clean_url(self.args.data)}' error ❌ {e}")) from e
RuntimeError: Dataset '/content/sign-language--1/data.yaml' error ❌ Dataset '/content/sign-language--1/data.yaml' images not found ⚠, missing path '/content/sign-language--1/sign-language--1/valid/images'
Note dataset download directory is '/content/datasets'. You can update this in '/root/.config/Ultralytics/settings.yaml'

This is the error I am getting. What should I do?
@kirthivershaM • 7 months ago
@@awhbxrry6309 Yes, I'm also getting this error. How do I rectify it?
@NLP__ • 4 months ago
@@kirthivershaM Copy the training and valid paths and paste them correctly into the data.yaml file; try this.
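A minimal sketch of that data.yaml fix in the Colab runtime: point the train and val entries at the folders that actually exist on disk. Assumptions: the dataset was unzipped to /content/sign-language--1 with images under train/images and valid/images; adjust the paths if your extraction layout differs.

```python
# Minimal sketch: rewrite the train/val paths in data.yaml so they match
# the folders that exist in the Colab runtime.
# Assumptions: dataset unzipped to /content/sign-language--1 with
# train/images and valid/images subfolders; adjust if yours differ.
import yaml  # PyYAML

yaml_path = "/content/sign-language--1/data.yaml"

with open(yaml_path) as f:
    cfg = yaml.safe_load(f)

cfg["train"] = "/content/sign-language--1/train/images"
cfg["val"] = "/content/sign-language--1/valid/images"

with open(yaml_path, "w") as f:
    yaml.safe_dump(cfg, f)

print(yaml.safe_dump(cfg))  # confirm the corrected paths
```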
@lakshmisoumya4772 • 11 months ago
Hi sir, how do we do the preprocessing of kidney tumor CT images?
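A common CT preprocessing recipe, sketched with a synthetic slice. Assumptions: real scans would first be loaded with pydicom (DICOM) or nibabel (NIfTI) and converted to Hounsfield units; the window of -200 to 300 HU is a typical soft-tissue range, not necessarily the one used in the video.

```python
# Minimal sketch of CT preprocessing: intensity windowing, normalization, resizing.
# Assumptions: a random array stands in for one CT slice in Hounsfield units;
# real data would come from pydicom or nibabel, and the HU window is illustrative.
import numpy as np

# Placeholder for one CT slice in Hounsfield units (HU)
slice_hu = np.random.randint(-1000, 1500, size=(512, 512)).astype(np.float32)

# 1. Window the intensities to the range of interest
hu_min, hu_max = -200.0, 300.0
windowed = np.clip(slice_hu, hu_min, hu_max)

# 2. Normalize to [0, 1] so the network sees a consistent scale
normalized = (windowed - hu_min) / (hu_max - hu_min)

# 3. Resize to the model's input size (here 256x256 by simple striding;
#    in practice cv2.resize or skimage.transform.resize is used)
resized = normalized[::2, ::2]

print(resized.shape, resized.min(), resized.max())
```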