
Stitching errors

  • Stitching errors

    Hi,

    First of all, I am happy that my Panono arrived, and it is fun to use.

    However, using it, I am unhappy about the stitching errors - frankly, it surprises me that every single picture I take has some kind of stitching error.
    The camera layout, the relationship of the cameras to each other, the lens characteristics - all of this is known, and the micro-variations between individual Panonos should be possible to optimize away.
    It is probably "state of the art", but it is far from what I would call perfect.

    Maybe common stitching algorithms, such as the one used here, make use of fewer assumptions than they could and therefore compromise?

    I am convinced it should be possible to optimize the result further. Far more processing-intensive? Definitely.
    If it is too expensive for the cloud, give us an offline option - I do not mean a third-party app, since it would not know the very specific Panono parameters/layout. I do not mind if it spends hours optimizing every single pixel, if necessary.

    Also, the "stitching system" should learn over time what the specific characteristics of the individual Panono are, so processing/optimizing would get better/cheaper over time.

    I am sure a lot of people will jump in and say "it is not possible"; to me that is more of an "it has not been done before".
    Over time you would know the trajectory in 3D space of the "ray" belonging to each individual camera pixel, and you could use that to optimize the result even when not many features are visible.
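    The "ray per pixel" idea is essentially what camera calibration gives you: with known intrinsics (focal length, principal point) and pose, every pixel back-projects to a ray in 3D space. A minimal sketch, assuming a simple pinhole model with no distortion and made-up calibration numbers (not Panono's actual parameters):

```python
import numpy as np

def pixel_to_ray(u, v, fx, fy, cx, cy, R, t):
    """Back-project pixel (u, v) to a ray in world coordinates.

    fx, fy: focal lengths in pixels; cx, cy: principal point;
    R, t: world-to-camera rotation and translation.
    Returns (origin, unit direction) of the ray in world space.
    """
    # Direction in camera coordinates (pinhole model, no distortion)
    d_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Rotate into world coordinates; the camera center is -R^T t
    d_world = R.T @ d_cam
    origin = -R.T @ t
    return origin, d_world / np.linalg.norm(d_world)

# Toy example: identity pose, 1000 px focal length, 2000x2000 sensor
R = np.eye(3)
t = np.zeros(3)
o, d = pixel_to_ray(1000, 1000, 1000, 1000, 1000, 1000, R, t)
print(o, d)  # origin [0 0 0], ray straight down the optical axis [0 0 1]
```

    With 36 such calibrated cameras, every pixel of every lens has a known ray, which is exactly the data a smarter stitcher could exploit.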


    Also: even when throwing the Panono in good light, there is always some blurriness remaining. Throwing has become more of a novelty for me than a real use case.



    Cheers,

  • #2
    Wow, you'll be a billionaire once you figure out how to outwit the laws of physics this way. Google would probably offer you the first many millions for your technique...



    • #3
      Sometimes sarcasm is so douchey.

      To the point: considering that most software for editing traditional photos has built-in features for correcting the aberrations of most lenses, it's obvious that this can be done. And considering that most photo-editing software also lets you adjust the default corrections for your individual lenses, it's obvious that this can be done at the user level.

      That said, given the complexity of the Panono's panoramas, the relatively small number that have been sold and are being used in the field (a few hundred, maybe?), and the likelihood that each of them has different aberrations, this kind of feature would likely only appear if Panono ever released software to allow editing at the local level.

      In the meantime, your best option is probably to provide Panono's support with the URLs of a number of panoramas that you've taken showing these problems and ask whether they consider it within spec. If these are parallax issues because of the proximity of the subject, it probably will be. But Panono support has already offered to replace cameras that had obviously misaligned lenses.



      • #4
        Could you post examples of these errors? The stitching results I get are very good.



        • #5
          Well, about "stitching" errors: this is a picture that was taken last weekend during a "hackathon", indoors + HDR: https://www.panono.com/p/LOIyGlmONU3S

          I have to say that most of the blurriness comes from the fact that it was indoors + many people were moving, but I am really happy with the picture itself. It was in a well-lit area, held high above our heads (around 2 meters from ground level), and I cannot find any big stitching errors in there. That said: the quality of the stitching really depends on how the shot is set up, and the Panono "app" is a really terrible example when it comes to stitching. Use the app just for a quick check of whether the picture is okay or not, but don't use it to claim that the stitching is horrible... because the app does no stitching.

          EDIT: About the "misaligned lenses": I had a camera that had to be sent back because one of the cables was loose on the inside. I sent them a picture of my camera's output, got a quick reply, and the resulting fix was that I had to send it back for replacement.



          • #6
            Originally posted by mrseeker View Post
            Well, about "stitching" errors: This is a picture that was taken last weekend during a "hackathlon", indoor + HDR: https://www.panono.com/p/LOIyGlmONU3S [...]

            Wow, yours came out great on subjects both near and far! Here's what I've been contending with:
            https://www.panono.com/p/3LlRZGZGlAgy

            I understand that nearby things don't tend to stitch as well due to parallax, but this has been terrible.



            • #7
              Originally posted by sapsaj View Post
              I understand that nearby things don't tend to stitch as well due to parallax, but this has been terrible.
              Yeah, but this is not (only) a parallax problem. To see this type of error, one has to understand how stitching works. The software searches for so-called control points: recognizable points that can be found in each pair of overlapping images. Those points need to have some kind of contrast to be determined. Using the relative positions of the control points, the software combines all the single images.

              In the above example there are white walls everywhere. There is no structure where control points could be found. Therefore the software can only align the images very roughly, using the known alignment of the lenses in the ball.

              Indoors, however, you often get a combination of both types of stitching errors (parallax and insufficient image structure).

              Hope this helps in understanding the errors.
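              A toy illustration of why contrast matters, assuming a 1-D strip of pixels and zero-mean normalized cross-correlation as the matching score (a common control-point matching criterion, not necessarily what Panono's stitcher uses):

```python
import numpy as np

def best_match(patch, strip):
    """Slide `patch` along a 1-D pixel `strip` and return
    (best offset, all scores) using zero-mean normalized
    cross-correlation (ZNCC)."""
    p = patch - patch.mean()
    scores = []
    for off in range(len(strip) - len(patch) + 1):
        w = strip[off:off + len(patch)]
        w = w - w.mean()
        denom = np.linalg.norm(p) * np.linalg.norm(w)
        # Zero-contrast windows have no defined correlation: score 0
        scores.append(p @ w / denom if denom > 1e-9 else 0.0)
    return int(np.argmax(scores)), np.array(scores)

# Textured edge: the match has a single, unambiguous peak
strip = np.array([1, 1, 1, 5, 9, 9, 9, 9], dtype=float)
patch = strip[2:5]          # [1, 5, 9]
off, scores = best_match(patch, strip)
print(off)                  # 2 -- exactly where the patch came from

# Flat white wall: every offset scores identically (zero)
flat = np.full(8, 9.0)
_, flat_scores = best_match(flat[2:5], flat)
print(flat_scores)          # all zeros -- no control point to lock onto
```

              On the textured strip the score peaks at exactly one offset; on the uniform "white wall" every offset is equally (un)good, which is why the stitcher has to fall back on the factory lens alignment there.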



              • #8
                In my picture you will see a lot of features. If you look at the "raw" picture, you will see that many of my pictures have people on multiple cameras, giving the stitcher a great chance of getting it right. I specifically told the girl to face at least one of the cameras, so that she would be in the middle of one picture, with minimal overlap.

                Another good example would be taking a picture inside a museum vs. taking a picture of a mural. There are a lot of white walls in a museum, and the stitcher has to figure out, by looking at the sides of the pictures, where they should be stitched. Having white walls does not help much... Having some sort of "identifying" feature in each pair of overlapping pictures helps a lot.

                A nice example would be this: http://www.artofvfx.com/ADJUSTMENT_B...EAU_VFX_02.jpg

                They use the dots behind the green screens to "match" the background; the dots are there to make sure that stitching and replacing the background go smoothly and without any issues. If they just used a plain green background, the background could shift and look awful.



                • #9

                  What I've been thinking about making is a machine-learning process that learns, per camera, what the positions of the lenses are relative to each other. Once you know, per stitching round, what the cutoff lines are in each picture, you also know the relative positions of the cameras.

                  If you drew a cone originating from each camera, you would know where two cones from different cameras intersect in 3D, and that means you would be able to tell, based on the pixel position in the picture, how far away a wall/object is - and optimize this using an algorithm per Panono.

                  This info could be used when stitching pictures of objects close to the camera. If I had the time to research / implement this, I would certainly do it.
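                  The intersection idea is classic two-ray triangulation: take the midpoint of the shortest segment between the two viewing rays. A minimal sketch with made-up lens positions (not the actual Panono geometry):

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between rays o1 + s*d1 and
    o2 + u*d2 (d1, d2 unit vectors): closest-point-of-approach math."""
    w = o1 - o2
    b = d1 @ d2
    denom = 1.0 - b * b
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; depth is unobservable")
    s = (b * (d2 @ w) - (d1 @ w)) / denom
    u = ((d2 @ w) - b * (d1 @ w)) / denom
    return (o1 + s * d1 + o2 + u * d2) / 2.0

# Two lenses 6 cm apart, both seeing the same point 1 m away
o1 = np.array([0.0, 0.0, 0.0])
o2 = np.array([0.06, 0.0, 0.0])
target = np.array([0.03, 0.0, 1.0])
d1 = (target - o1) / np.linalg.norm(target - o1)
d2 = (target - o2) / np.linalg.norm(target - o2)
p = triangulate(o1, d1, o2, d2)
print(p)  # recovers the target point [0.03, 0, 1.0]
```

                  Once the per-pixel depth is known, each source image could be reprojected before blending, which is exactly what would help with close-range parallax.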
