{"id":407,"date":"2023-11-09T17:46:38","date_gmt":"2023-11-09T20:46:38","guid":{"rendered":"https:\/\/utfpr.curitiba.br\/lassip\/?p=407"},"modified":"2023-11-09T17:46:40","modified_gmt":"2023-11-09T20:46:40","slug":"lucas-kanade","status":"publish","type":"post","link":"https:\/\/utfpr.curitiba.br\/lassip\/2023\/11\/09\/lucas-kanade\/","title":{"rendered":"Motion tracking with the Lucas-Kanade algorithm (with code)"},"content":{"rendered":"\n<p>To quickly get what we are talking about, take a look at this video:<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe title=\"Lucas-Kanade example (keyboard)\" width=\"800\" height=\"450\" src=\"https:\/\/www.youtube.com\/embed\/rar3kj4TpjI?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>Now, let&#8217;s clarify some jargon issues. We are talking about <strong>optical flow<\/strong> algorithms, and when people talk about optical flow algorithms, they are <strong>almost always<\/strong> referring to <strong>dense optical flow<\/strong>. In that case, the flow is computed across the entire image (actually, across a pair of images) and yields a velocity field. 
The figure below illustrates that well: the bottom image shows the velocity field obtained from the upper image by a dense optical flow algorithm.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img fetchpriority=\"high\" decoding=\"async\" width=\"418\" height=\"582\" src=\"https:\/\/utfpr-ct-static-content.s3.amazonaws.com\/utfpr.curitiba.br\/wp-content\/uploads\/sites\/131\/2023\/11\/image.png\" alt=\"\" class=\"wp-image-408\" srcset=\"https:\/\/utfpr-ct-static-content.s3.amazonaws.com\/utfpr.curitiba.br\/wp-content\/uploads\/sites\/131\/2023\/11\/image.png 418w, https:\/\/utfpr-ct-static-content.s3.amazonaws.com\/utfpr.curitiba.br\/wp-content\/uploads\/sites\/131\/2023\/11\/image-215x300.png 215w\" sizes=\"(max-width: 418px) 100vw, 418px\" \/><\/figure>\n\n\n\n<p class=\"has-text-align-center has-small-font-size\">Figure 1 &#8211; <strong>Not <\/strong>the kind of optical flow we are talking about.<br>Source: <a href=\"https:\/\/docs.opencv.org\/4.x\/d4\/dee\/tutorial_optical_flow.html\">https:\/\/docs.opencv.org\/4.x\/d4\/dee\/tutorial_optical_flow.html<\/a><\/p>\n\n\n\n<p>What kind of optical flow are we talking about, then? Name it as you like, since it does not have a broadly used name: feature tracking, sparse (please, no) optical flow, corner optical flow, etc. The key idea is that we are tracking specific points (corners) across a video. For this task, there is a widely used, robust, OpenCV-native algorithm: the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Lucas%E2%80%93Kanade_method\">Lucas-Kanade method<\/a>. That&#8217;s what we used in the video at the beginning of the post.<\/p>\n\n\n\n<p>The first thing you need to do when you ask the Lucas-Kanade method to track a corner is&#8230; to identify the corner in the image. 
Fortunately, OpenCV has an implementation of the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Corner_detection#The_Harris_&amp;_Stephens_\/_Shi%E2%80%93Tomasi_corner_detection_algorithms\">Shi-Tomasi corner detection algorithm<\/a>, which finds good candidate points in the initial frame of a video and provides them in a format ready to be passed to the Lucas-Kanade method. Voil\u00e0: that&#8217;s what the script in the video does.<\/p>\n\n\n\n<p>The code is available below and was slightly adapted from <a href=\"https:\/\/docs.opencv.org\/4.x\/d4\/dee\/tutorial_optical_flow.html\">this OpenCV doc page<\/a>. If you&#8217;re eager to reproduce <strong>exactly <\/strong>what you see in the video, here&#8217;s a link to the <a href=\"https:\/\/drive.google.com\/file\/d\/1bDPSHhFdXPFDcPLB6qySWhGDrzwqA7NH\/view?usp=sharing\">input video at Google Drive<\/a>.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import numpy as np\r\nimport cv2 as cv\r\nimport matplotlib.pyplot as plt\r\n\r\ncap = cv.VideoCapture('videok.mp4')\r\n# Parameters for Shi-Tomasi corner detection\r\nfeature_params = dict( maxCorners = 5,\r\n                       qualityLevel = 0.3,\r\n                       minDistance = 7,\r\n                       blockSize = 7 )\r\n# Parameters for Lucas-Kanade optical flow\r\nlk_params = dict( winSize  = (15, 15),\r\n                  maxLevel = 2,\r\n                  criteria = (cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 10, 0.03))\r\n# Create some random colors, one per track\r\ncolor = np.random.randint(0, 255, (100, 3))\r\n# Take the first frame and find corners in it\r\nret, old_frame = cap.read()\r\nold_gray = cv.cvtColor(old_frame, cv.COLOR_BGR2GRAY)\r\np0 = cv.goodFeaturesToTrack(old_gray, mask = None, **feature_params)\r\n# Create a mask image for drawing the tracks\r\nmask = np.zeros_like(old_frame)\r\n# Trajectory of the last tracked corner, plotted after the loop\r\nlist_x = list()\r\nlist_y = list()\r\nwhile True:\r\n    ret, frame = cap.read()\r\n    if not ret:\r\n        print('No frames grabbed!')\r\n        break\r\n    frame_gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)\r\n    # Calculate the optical flow from the previous frame to the current one\r\n    p1, st, err = cv.calcOpticalFlowPyrLK(old_gray, frame_gray, p0, None, **lk_params)\r\n    if p1 is None:\r\n        print('All points lost!')\r\n        break\r\n    # Select the points that were successfully tracked (status == 1)\r\n    good_new = p1&#091;st==1]\r\n    good_old = p0&#091;st==1]\r\n    # Draw the tracks\r\n    for i, (new, old) in enumerate(zip(good_new, good_old)):\r\n        a, b = new.ravel()\r\n        c, d = old.ravel()\r\n        mask = cv.line(mask, (int(a), int(b)), (int(c), int(d)), color&#091;i].tolist(), 2)\r\n        frame = cv.circle(frame, (int(a), int(b)), 5, color&#091;i].tolist(), -1)\r\n    if len(good_new) &gt; 0:\r\n        list_x.append(a)\r\n        list_y.append(b)\r\n    img = cv.add(frame, mask)\r\n    cv.imshow('frame', img)\r\n    k = cv.waitKey(30) &amp; 0xff\r\n    if k == 27:  # Esc stops the loop\r\n        break\r\n    # Now update the previous frame and previous points\r\n    old_gray = frame_gray.copy()\r\n    p0 = good_new.reshape(-1, 1, 2)\r\ncap.release()\r\ncv.destroyAllWindows()\r\n\r\nplt.plot(list_x, list_y)\r\nplt.axis('equal')\r\nplt.title('Estimated trajectory')\r\nplt.xlabel('x &#091;pixels]')\r\nplt.ylabel('y &#091;pixels]')\r\nplt.show()<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>To quickly get what we are talking about, take a look at this video: Now, let&#8217;s clarify some jargon 
issues.<\/p>\n","protected":false},"author":44,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[1],"tags":[],"class_list":["post-407","post","type-post","status-publish","format-standard","hentry","category-blog"],"_links":{"self":[{"href":"https:\/\/utfpr.curitiba.br\/lassip\/wp-json\/wp\/v2\/posts\/407","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/utfpr.curitiba.br\/lassip\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/utfpr.curitiba.br\/lassip\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/utfpr.curitiba.br\/lassip\/wp-json\/wp\/v2\/users\/44"}],"replies":[{"embeddable":true,"href":"https:\/\/utfpr.curitiba.br\/lassip\/wp-json\/wp\/v2\/comments?post=407"}],"version-history":[{"count":3,"href":"https:\/\/utfpr.curitiba.br\/lassip\/wp-json\/wp\/v2\/posts\/407\/revisions"}],"predecessor-version":[{"id":413,"href":"https:\/\/utfpr.curitiba.br\/lassip\/wp-json\/wp\/v2\/posts\/407\/revisions\/413"}],"wp:attachment":[{"href":"https:\/\/utfpr.curitiba.br\/lassip\/wp-json\/wp\/v2\/media?parent=407"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/utfpr.curitiba.br\/lassip\/wp-json\/wp\/v2\/categories?post=407"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/utfpr.curitiba.br\/lassip\/wp-json\/wp\/v2\/tags?post=407"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}