I am trying to run two scripts in parallel, with one feeding the other.
First I trained a model to decode different gestures. I followed the tutorial right here: https://www.youtube.com/watch?v=yqkISICHH-U
That script opens the webcam and decodes the gestures I make, and it creates a new variable (called mvt_ok) when the same movement is decoded 3 consecutive times. At that point I want to send this information to another script, which will be an experimental task developed in PsychoPy (a Python tool for building psychology experiments). Basically, as soon as the first script (gesture detection with the webcam) feeds the second one, I want the second one (the PsychoPy task) to present a new stimulus.
To summarise: I want to open the video, then start the PsychoPy script and present the first stimulus. A movement is then expected to be detected in the video, and this information should be fed to the PsychoPy script to change the stimulus.
So far I am far from achieving that; all I have managed is to send mvt_ok to another script with a function like the following:
def f(child_conn, mvt_ok):
    print(mvt_ok)
Actually, I am not sure how I could reuse the mvt_ok variable to feed it to my PsychoPy script.
I won't include all the gesture-recognition code because it would be too long, but the most crucial lines are here:
if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    sentence = []
    while cap.isOpened():
        ret, frame = cap.read()
        image_np = np.array(frame)
        input_tensor = tf.convert_to_tensor(np.expand_dims(image_np, 0), dtype=tf.float32)
        detections = detect_fn(input_tensor)
        num_detections = int(detections.pop('num_detections'))
        detections = {key: value[0, :num_detections].numpy()
                      for key, value in detections.items()}
        detections['num_detections'] = num_detections
        # detection_classes should be ints.
        detections['detection_classes'] = detections['detection_classes'].astype(np.int64)
        label_id_offset = 1
        image_np_with_detections = image_np.copy()
        viz_utils.visualize_boxes_and_labels_on_image_array(
            image_np_with_detections,
            detections['detection_boxes'],
            detections['detection_classes'] + label_id_offset,
            detections['detection_scores'],
            category_index,
            use_normalized_coordinates=True,
            max_boxes_to_draw=5,
            min_score_thresh=.8,
            agnostic_mode=False)
        cv2.imshow('object detection', cv2.resize(image_np_with_detections, (800, 600)))
        if np.max(detections['detection_scores']) > 0.95:
            word = category_index[detections['detection_classes'][np.argmax(detections['detection_scores'])] + 1]['name']
            sentence.append(word)
            if len(sentence) >= 3:
                if sentence[-1] == sentence[-2] and sentence[-1] == sentence[-3]:
                    print('ok')
                    mvt_ok = 1
                    p = Process(target=f, args=(child_conn, mvt_ok))
                    p.start()
                    p.join()
        if cv2.waitKey(10) & 0xFF == ord('q'):
            cap.release()
            cv2.destroyAllWindows()
            break