As titled. I tried to follow the JetsonHacks guide and had some problems running the RealSense camera. The guide itself is fine, but Python could not find pyrealsense2. Adding the build directory to PYTHONPATH fixed the import, but the imported package contained no methods. Copying all of the generated .so files (e.g. pybackend2.cpython-*.so) into the working folder seems to resolve the problem.
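For reference, a minimal sketch of the two workarounds. The build path below is an assumption (it matches a default from-source librealsense build; adjust it to wherever the wrapper was actually built on your machine):

```shell
# Hypothetical location of the generated Python bindings; adjust as needed
BUILD_PY=~/librealsense/build/wrappers/python

# Workaround 1: point Python at the build output
export PYTHONPATH=$PYTHONPATH:$BUILD_PY

# Workaround 2: copy ALL generated extension modules into the working folder,
# including the backend module, not just the pyrealsense2 one
cp "$BUILD_PY"/pyrealsense2*.so "$BUILD_PY"/pybackend2*.so .
```

The "package contains no methods" symptom suggests Python was picking up an incomplete copy of the package, so making every compiled .so visible together is what matters.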
The test code appears to work but is very slow; I need to figure out where the bottleneck is.
Update:
Not sure what made his code so slow, but real-time detection works when the model is used directly, as simply as follows.
import numpy as np
import pyrealsense2 as rs
import cv2
import torch

# Configure colour and depth streams at 640x480, 30 fps
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)

pipeline = rs.pipeline()
profile = pipeline.start(config)

# Align depth frames to the colour frame
align_to = rs.stream.color
align = rs.align(align_to)

# Pretrained YOLOv5s from the PyTorch hub
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

while True:
    frames = pipeline.wait_for_frames()
    aligned_frames = align.process(frames)
    color_frame = aligned_frames.get_color_frame()
    depth_frame = aligned_frames.get_depth_frame()

    img = np.asanyarray(color_frame.get_data())
    depth_image = np.asanyarray(depth_frame.get_data())

    cv2.imshow('color image', img)
    cv2.imshow('depth image', depth_image)
    # Run detection and show the annotated frame
    cv2.imshow('detected', model(img).render()[0])

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

pipeline.stop()
cv2.destroyAllWindows()
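One caveat: imshow on the raw 16-bit z16 frame renders very dark, since OpenCV just truncates uint16 values for display. A common fix is to scale depth to 8-bit first. A minimal sketch (the helper name and the 4000 mm clipping range are my own choices, not from the original code):

```python
import numpy as np

def depth_to_8bit(depth_image, max_mm=4000):
    """Scale a 16-bit RealSense depth frame (millimetres) to uint8 for display.

    max_mm clips the usable range; 4000 mm is an arbitrary indoor default.
    """
    clipped = np.clip(depth_image.astype(np.float32), 0, max_mm)
    return (clipped / max_mm * 255).astype(np.uint8)
```

In the loop above one could then show, e.g., cv2.applyColorMap(depth_to_8bit(depth_image), cv2.COLORMAP_JET) instead of the raw frame.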
N.B. I needed to add export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1 to .bashrc; otherwise OpenCV hits a memory allocation error when torch.hub.load runs.
Also note that the depth information is not used for object detection here, but the original demo code didn't use it either.
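If one did want to combine them, a simple start is to look up the aligned depth at each detection's box centre. A sketch, assuming the YOLOv5 hub results expose detections as an N×6 array of (x1, y1, x2, y2, conf, cls) via results.xyxy[0]; the helper and its depth_scale default are my own:

```python
import numpy as np

def box_center_depth(detections, depth_image, depth_scale=0.001):
    """Return the depth in metres at each bounding-box centre.

    detections: iterable of (x1, y1, x2, y2, conf, cls) rows in pixel coords.
    depth_image: aligned z16 frame as a numpy array (raw units, typically mm).
    depth_scale: raw-unit-to-metre factor (0.001 assumes millimetres).
    """
    distances = []
    for x1, y1, x2, y2, conf, cls in np.asarray(detections):
        cx = int((x1 + x2) / 2)
        cy = int((y1 + y2) / 2)
        distances.append(float(depth_image[cy, cx]) * depth_scale)
    return distances
```

Inside the loop this would be something like box_center_depth(model(img).xyxy[0].cpu().numpy(), depth_image). A single centre pixel is noisy in practice; averaging a small window around it would be more robust.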