c++ – OpenCV: strange rotation and translation matrices from decomposeHomographyMat
I am trying to solve a plane-to-plane projection problem using the findHomography function from the OpenCV library. As a toy example I have a set of points P in R^2 and a second set of points Q in R^2, where Qx = Px + 50 and Qy = Py; in other words, I shifted the x-coordinates by 50. Now I run the following code:

Mat projectionMatrix = findHomography(Q, P);
vector<Point2f> projectedPoints(objectCoordinates.size());
perspectiveTransform(Q, projectedPoints, projectionMatrix);

This gives me P, which is great. However, now I would like the rotation and translation matrices, R & T, and this is where I get confused. OpenCV 3 has a function called decomposeHomographyMat that returns up to 4 solutions for R and T (it also returns the normals, but I do not store those). I run it like this:

vector<Mat> Rs;
vector<Mat> Ts;
decomposeHomographyMat(projectionMatrix, cameraMatrix, Rs, Ts, noArray());

The cameraMatrix I use has been tried and tested in previous experiments. OK, so I get my four results. Looking at them, I notice that I get the identity matrix as the result for all R's, which is great. However, all the translation vectors are [0,0,0]^T, whereas I would expect at least one of them to be [-50,0,0]^T. Is there something I need to do with the result of decomposeHomographyMat to get the expected behaviour?
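For reference, a minimal self-contained version of this toy setup might look roughly as follows; the point coordinates and the camera matrix below are illustrative placeholders rather than the actual data and calibration used in the question:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
  // Toy data: Q is P shifted by +50 along x (coordinates are illustrative)
  std::vector<cv::Point2f> P{{10, 10}, {200, 40}, {60, 300}, {250, 250}};
  std::vector<cv::Point2f> Q(P.size());
  for (size_t i = 0; i < P.size(); i++) {
    Q[i] = cv::Point2f(P[i].x + 50, P[i].y);
  }

  // Homography mapping Q back onto P, plus a reprojection check
  cv::Mat projectionMatrix = cv::findHomography(Q, P);
  std::vector<cv::Point2f> projectedPoints(Q.size());
  cv::perspectiveTransform(Q, projectedPoints, projectionMatrix);

  // Placeholder intrinsics (the real cameraMatrix comes from calibration)
  cv::Mat cameraMatrix = (cv::Mat1d(3, 3) << 100, 0, 320,
                                               0, 100, 240,
                                               0,   0,   1);

  // Decompose the homography into rotation/translation hypotheses
  std::vector<cv::Mat> Rs, Ts;
  cv::decomposeHomographyMat(projectionMatrix, cameraMatrix, Rs, Ts, cv::noArray());

  for (size_t i = 0; i < Rs.size(); i++) {
    std::cout << "R" << i << " =\n" << Rs[i]
              << "\nt" << i << " =\n" << Ts[i] << std::endl;
  }
  return 0;
}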

Thanks

Best answer
It turned out I was wrong in some respects, so I decided to rewrite this answer.

In short: you were getting strange results because of an incorrect intrinsic parameter matrix.

Using the terminology from the paper by Malis, E. and Vargas, M., "Deeper understanding of the homography decomposition for vision-based control" (the homography decomposition in OpenCV is based on it), the perspective transform is denoted by H and called the Euclidean homography matrix, and the result of its normalization, G = K^-1 * H * K (where K is the camera's calibration matrix), is called the homography matrix.

Both cv::findHomography() and cv::decomposeHomographyMat() work with the Euclidean homography matrix. But in order to decompose it into translation and rotation, cv::decomposeHomographyMat() normalizes the Euclidean homography matrix to obtain the homography matrix, and it relies on the K provided by the user to perform this normalization.
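Conceptually, that normalization step amounts to computing G = K^-1 * H * K from the caller's K. Here is a minimal illustrative sketch of that formula (the helper name normalizeHomography is made up for this sketch and is not part of OpenCV; the library performs the equivalent step internally):

#include <opencv2/opencv.hpp>

// Illustrative sketch of the normalization described above: G = K^-1 * H * K.
// H: Euclidean homography (e.g. from cv::findHomography), K: 3x3 calibration matrix.
cv::Mat normalizeHomography(const cv::Mat& H, const cv::Mat& K) {
  cv::Mat Hd, Kd;
  H.convertTo(Hd, CV_64F);  // bring both operands to a common type
  K.convertTo(Kd, CV_64F);
  return Kd.inv() * Hd * Kd;
}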

As for the estimation of K, I think that is beyond the scope of this question. The problem is called camera auto-calibration; here is a relevant quote from the wiki article:

Therefore, three views are the minimum needed for full calibration with fixed intrinsic parameters between views. Quality modern imaging sensors and optics may also provide further prior constraints on the calibration such as zero skew (orthogonal pixel grid) and unity aspect ratio (square pixels). Integrating these priors will reduce the minimal number of images needed to two.

So it appears that, under the zero-skew and square-pixel assumptions, you could extract K from image correspondences across 2 frames taken by the same camera. However, I am not familiar with this subject, so I cannot give you more advice.

So, in order to check whether my interpretation is correct, I made a small example that projects some points lying on a 3D plane onto 2 virtual cameras, finds a homography, decomposes it, and allows comparing that decomposition to the ground-truth rotation and translation vectors. This is better than real inputs, because this way we know K precisely and can decouple the error in its estimation from the error in R and t. For the inputs I have checked, it was able to estimate the rotation and translation vectors correctly, although for some reason the translation is always 10 times smaller than the ground truth. Maybe this decomposition is only defined up to scale (I am not sure right now), but it is interesting that it is related to the ground-truth value by a fixed coefficient.

Here is the source:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>


int main() {
  // set up a virtual camera
  float f = 100, w = 640, h = 480;

  cv::Mat1f K = (cv::Mat1f(3, 3) <<
      f, 0, w/2,
      0, f, h/2,
      0, 0,   1);

  // set transformation from 1st to 2nd camera (assume K is unchanged)
  cv::Mat1f rvecDeg = (cv::Mat1f(3, 1) << 45, 12, 66);
  cv::Mat1f t = (cv::Mat1f(3, 1) << 100, 200, 300);

  std::cout << "-------------------------------------------\n";
  std::cout << "Ground truth:\n";

  std::cout << "K = \n" << K << std::endl << std::endl;
  std::cout << "rvec = \n" << rvecDeg << std::endl << std::endl;
  std::cout << "t = \n" << t << std::endl << std::endl;

  // set up points on a plane
  std::vector<cv::Point3f> p3d{{0, 0, 10},
                               {100, 0, 10},
                               {0, 100, 10},
                               {100, 100, 10}};

  // project on both cameras
  std::vector<cv::Point2f> Q, P;

  cv::projectPoints(p3d,
                    cv::Mat1d::zeros(3, 1),
                    cv::Mat1d::zeros(3, 1),
                    K,
                    cv::Mat(),
                    Q);

  cv::projectPoints(p3d,
                    rvecDeg*CV_PI/180,
                    t,
                    K,
                    cv::Mat(),
                    P);

  // find homography
  cv::Mat H = cv::findHomography(Q, P);

  std::cout << "-------------------------------------------\n";
  std::cout << "Estimated H = \n" << H << std::endl << std::endl;


  // check by reprojection
  std::vector<cv::Point2f> P_(P.size());
  cv::perspectiveTransform(Q, P_, H);

  float sumError = 0;

  for (size_t i = 0; i < P.size(); i++) {
    sumError += cv::norm(P[i] - P_[i]);
  }

  std::cout << "-------------------------------------------\n";
  std::cout << "Average reprojection error = "
      << sumError/P.size() << std::endl << std::endl;


  // decompose the homography using the true camera matrix K
  std::vector<cv::Mat> Rs, Ts;
  cv::decomposeHomographyMat(H,
                             K,
                             Rs, Ts,
                             cv::noArray());

  std::cout << "-------------------------------------------\n";
  std::cout << "Estimated decomposition:\n\n";
  std::cout << "rvec = " << std::endl;
  for (auto R_ : Rs) {
    cv::Mat1d rvec;
    cv::Rodrigues(R_, rvec);
    std::cout << rvec*180/CV_PI << std::endl << std::endl;
  }

  std::cout << std::endl;

  std::cout << "t = " << std::endl;
  for (auto t_ : Ts) {
    std::cout << t_ << std::endl << std::endl;
  }

  return 0;
}

Here is the output on my machine:

-------------------------------------------
Ground truth:
K =
[100, 0, 320;
0, 100, 240;
0, 0, 1]

rvec =
[45;
12;
66]

t =
[100;
200;
300]

-------------------------------------------
Estimated H =
[0.04136041220427821, 0.04748763375951008, 358.5557917287962;
0.05074854454707714, 0.06137211243830468, 297.4585754092336;
8.294458769850147e-05, 0.0002294875562580223, 1]

-------------------------------------------
Average reprojection error = 0

-------------------------------------------
Estimated decomposition:

rvec =
[-73.21470385654712;
56.64668212487194;
82.09114210289061]

[-73.21470385654712;
56.64668212487194;
82.09114210289061]

[45.00005330430893;
12.00000697952995;
65.99998380038915]

[45.00005330430893;
12.00000697952995;
65.99998380038915]


t =
[10.76993852870029;
18.60689642878277;
30.62344129378435]

[-10.76993852870029;
-18.60689642878277;
-30.62344129378435]

[10.00001378255982;
20.00002581449634;
30.0000336510648]

[-10.00001378255982;
-20.00002581449634;
-30.0000336510648]

As you can see, there is a correct estimate of the rotation vector among the hypotheses, and there is an up-to-scale correct estimate of the translation.
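A plausible explanation for the fixed factor of 10, consistent with the plane in the example lying at z = 10 while the first camera sits at the origin, is that the decomposition returns the translation divided by the distance d from the first camera to the plane. Under that assumption, and continuing from the Ts vector in the listing above, the metric translation could be recovered like this (a hedged sketch, not verified beyond this synthetic case):

  // Assumption: the returned translations are t/d, with d the distance from
  // the first camera to the plane (d = 10 in the synthetic setup above).
  const double planeDistance = 10.0;  // known only because we built the scene
  for (const auto& t_ : Ts) {
    std::cout << t_ * planeDistance << std::endl;  // expect roughly [100, 200, 300]^T up to sign
  }

If that reading is right, the four translation hypotheses printed above, multiplied by 10, indeed match the ground truth [100, 200, 300]^T up to sign.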

