Implementing YOLOv8 for Object Detection

When it comes to spotting and tallying up vehicles, here’s how we do it in four simple steps:

Step 1: Importing Necessary Libraries

All the libraries required for the model are imported.

Python
import cv2
import torch
from collections import defaultdict

import supervision as sv
from ultralytics import YOLO

Step 2: Loading the Pretrained Model

By using this code, we load the pretrained YOLOv8 (You Only Look Once, version 8) nano model from the ultralytics library and run object detection on a video file (d.mp4). The arguments passed to predict are: source, the input video; save=True, which saves the annotated results to disk; imgsz=320, the image size used for inference; and conf=0.5, the minimum confidence a detection must reach to be kept.

Python
# Load a pretrained YOLOv8n model
model = YOLO('yolov8n.pt')

# Run inference on 'd.mp4' with custom arguments
model.predict(source="d.mp4", save=True, imgsz=320, conf=0.5)
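
The call returns a list of result objects, one per frame, which can be inspected programmatically. The short sketch below is an illustrative addition (assuming the standard ultralytics Results API) that captures the return value and prints the detected class names and confidence scores for the first frame.

Python
# Capture the results instead of discarding them
results = model.predict(source="d.mp4", save=True, imgsz=320, conf=0.5)

# Each entry is a Results object for one frame of the video
first = results[0]
for cls_id, conf in zip(first.boxes.cls, first.boxes.conf):
    print(model.names[int(cls_id)], float(conf))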

Step 3: Tracking Objects with the Model

This code uses the YOLOv8 model to perform object tracking on a video file (d.mp4). The track method takes the same source argument as predict, plus tracking-specific parameters: conf=0.3 sets the minimum detection confidence, iou=0.5 is the IoU threshold used during non-maximum suppression, save=True writes the annotated output video to disk, and tracker="bytetrack.yaml" selects the ByteTrack tracker configuration that ships with ultralytics.

Python
# Configure the tracking parameters and run the tracker
model = YOLO('yolov8n.pt')

results = model.track(source="d.mp4", conf=0.3, iou=0.5, save=True, tracker="bytetrack.yaml")
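
Each element of results corresponds to one frame of the video. As a quick sanity check, the sketch below (an illustrative addition, not part of the original tutorial) prints the track ID and class name of every object found in each frame; note that boxes.id can be None on frames where nothing is tracked.

Python
# Inspect the tracking output: one Results object per frame
for r in results:
    if r.boxes.id is not None:  # frames with no tracked objects have id set to None
        ids = r.boxes.id.int().tolist()
        names = [model.names[int(c)] for c in r.boxes.cls]
        print(list(zip(ids, names)))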

Step 4: Line Crossing Detection in Video using ByteTrack

The code loads a YOLOv8 model to track objects in a video (d.mp4) and detects when they cross a defined line. It captures and processes each frame, annotating tracked objects and counting those that cross the line. The annotated video with crossing counts is saved as output_single_line.mp4.

Python
# Load the YOLOv8 model
model = YOLO('yolov8n.pt')

# Set up video capture
cap = cv2.VideoCapture("d.mp4")

# Define the line coordinates
START = sv.Point(182, 254)
END = sv.Point(462, 254)

# Store the track history
track_history = defaultdict(lambda: [])

# Create a dictionary to keep track of objects that have crossed the line
crossed_objects = {}

# Open a video sink for the output video
video_info = sv.VideoInfo.from_video_path("d.mp4")
with sv.VideoSink("output_single_line.mp4", video_info) as sink:
    
    while cap.isOpened():
        success, frame = cap.read()

        if success:
            # Run YOLOv8 tracking on the frame, persisting tracks between frames
            results = model.track(frame, classes=[2, 3, 5, 7], persist=True, save=True, tracker="bytetrack.yaml")

            # Get the boxes and track IDs (boxes.id is None when no objects are tracked)
            if results[0].boxes.id is None:
                sink.write_frame(frame)
                continue
            boxes = results[0].boxes.xywh.cpu()
            track_ids = results[0].boxes.id.int().cpu().tolist()

            # Visualize the results on the frame
            annotated_frame = results[0].plot()

            # Plot the tracks and count objects crossing the line
            for box, track_id in zip(boxes, track_ids):
                x, y, w, h = box
                track = track_history[track_id]
                track.append((float(x), float(y)))  # x, y center point
                if len(track) > 30:  # keep only the last 30 center points (~30 frames)
                    track.pop(0)

                # Check if the object crosses the line
                if START.x < x < END.x and abs(y - START.y) < 5:  # the counting line is horizontal: check the x-span and the distance to the line's y
                    if track_id not in crossed_objects:
                        crossed_objects[track_id] = True

                    # Annotate the object as it crosses the line
                    cv2.rectangle(annotated_frame, (int(x - w / 2), int(y - h / 2)), (int(x + w / 2), int(y + h / 2)), (0, 255, 0), 2)

            # Draw the line on the frame
            cv2.line(annotated_frame, (START.x, START.y), (END.x, END.y), (0, 255, 0), 2)

            # Write the count of objects on each frame
            count_text = f"Objects crossed: {len(crossed_objects)}"
            cv2.putText(annotated_frame, count_text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)

            # Write the frame with annotations to the output video
            sink.write_frame(annotated_frame)
        else:
            break

# Release the video capture
cap.release()

Output:

The output is an .mp4 file (output_single_line.mp4) saved in your working environment; the image below shows how line-crossing detection appears across the video.

[Output: annotated frame with the counting line and the number of objects that crossed it]
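
As a side note, the supervision library imported in Step 1 also ships a ready-made line counter. The sketch below is an optional alternative to the manual crossing check above, assuming a recent supervision release where sv.LineZone and sv.Detections.from_ultralytics are available; it reports separate in/out counts rather than a single total.

Python
# Optional: count line crossings with supervision's LineZone
line_zone = sv.LineZone(start=START, end=END)

# stream=True yields one Results object per frame as the video is processed
for r in model.track("d.mp4", classes=[2, 3, 5, 7], persist=True,
                     stream=True, tracker="bytetrack.yaml"):
    detections = sv.Detections.from_ultralytics(r)
    if r.boxes.id is not None:
        # LineZone needs tracker IDs to know which objects have already crossed
        detections.tracker_id = r.boxes.id.int().cpu().numpy()
        line_zone.trigger(detections)

print("Crossed in:", line_zone.in_count, "Crossed out:", line_zone.out_count)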

In this article, we dove into the advances YOLOv8 brings to object detection. We talked about how it’s fast, accurate, and versatile. YOLOv8 is a big deal in computer vision, opening up new possibilities for research and development. Its impact on areas like autonomous vehicles and surveillance is huge, and there’s plenty of potential for further innovation and exploration in the field.

