This may be more of a question for a 3D CAD forum or some other sort of 3D forum, but I’m coming at it from the photo side of things. Likely someone has had a similar question.
I’d like to use two flat 2D images, both with a common background, in a composite image. Both pictures are taken pointing in the same direction; one is just taken a good distance behind the spot the other was taken from.
I can adjust the scale of the rear picture so the common distant objects are the same height. However, objects in the foreground, close to the camera position, are not the right height when compared to objects at that distance in the second picture, taken a distance behind the first.
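As far as I can tell, that mismatch falls out of the basic pinhole-camera relationship, where apparent height is roughly real height times focal length divided by distance. Here is a minimal sketch of that (a simple pinhole model with made-up numbers, not tied to any particular app) showing why a single scale factor can line up the distant background but not the foreground:

```python
# Minimal pinhole-camera sketch (assumed model, made-up numbers):
# apparent height on the sensor ~ focal_length * real_height / distance.

def apparent_height(real_height_m, distance_m, focal_length_mm=50.0):
    """Apparent image height (arbitrary units) under a simple pinhole model."""
    return focal_length_mm * real_height_m / distance_m

# Camera 1 (front position) and camera 2 (20 m behind it), same lens.
camera_offset_m = 20.0

background = {"real_height_m": 10.0, "distance_from_cam1_m": 200.0}
foreground = {"real_height_m": 2.0,  "distance_from_cam1_m": 5.0}

for name, obj in [("background", background), ("foreground", foreground)]:
    h1 = apparent_height(obj["real_height_m"], obj["distance_from_cam1_m"])
    h2 = apparent_height(obj["real_height_m"],
                         obj["distance_from_cam1_m"] + camera_offset_m)
    # Scale factor needed to make this object match between the two shots:
    print(f"{name}: rear image must be scaled by {h1 / h2:.2f}x to match")

# With these numbers the background needs about 1.10x but the foreground
# needs 5.00x, so no single scale factor can line up both at once.
```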
I saw that GIMP has a 3D transform tool, but it looks like it would not really be of any use for what I’m thinking.
I suppose maybe using a different lens for the two pictures is an option.
It also seems like this may be possible with software. I only use GIMP a little. I’ve used some 3D applications even less, but have used Sketchup a little most recently.
It allows you to create a rectangle, then pull or push its face up or down to create a box shape. You can then adjust the size of one of the faces to create some sort of trapezoidal box, a beveled edge, or whatnot.
I was thinking that if you could put an image on the original rectangle and leave it on as you pulled it up and adjusted the size, maybe you could do this. The image is a raster image and the 3D app uses vector graphics, but in the initial box shape, right after the rectangle has been pulled up, each pixel in the top rectangle would line up with the same pixel in the bottom rectangle, and they could be connected with a straight line. Any adjustments made later would still follow those lines. At that point the location of each pixel on the initial rectangle surface is known, and with the pixels now connected by lines, you have the makings of 3D geometry and the relationships to create a 3D vector image.
So the final image, adjusted for perspective, would be composed of pixels from a series of slices and where they fall on the surface of each slice. If you adjusted the top layer to be smaller, the middle would be the top slice, and each outer slice would be the pixels on the lines that fall outside the boundaries of the top slice.
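To put the pixels-connected-by-lines idea in concrete terms, here is a rough sketch (just my own interpretation, not any existing app’s feature or API): the bottom rectangle keeps the original pixel positions, the top rectangle is a scaled-down copy, and every slice in between is a straight-line blend of the two.

```python
# Rough sketch of the "pixels connected by straight lines" idea
# (my own interpretation, not an existing tool): the bottom rectangle
# keeps the original pixel positions, the top rectangle is a scaled-down
# copy, and each slice in between is a linear interpolation of the two.

def slice_position(x, y, width, height, top_scale, t):
    """Where pixel (x, y) lands on the slice at fraction t (0=bottom, 1=top)."""
    cx, cy = width / 2.0, height / 2.0   # scale the top face about its center
    top_x = cx + (x - cx) * top_scale    # pixel's position on the top face
    top_y = cy + (y - cy) * top_scale
    # Point t of the way along the straight line from bottom to top.
    return (x + (top_x - x) * t, y + (top_y - y) * t)

# Example: a 1000 x 800 image whose top face is shrunk to 60 percent.
print(slice_position(100, 100, 1000, 800, top_scale=0.6, t=0.5))
# -> (180.0, 160.0): halfway up, the corner pixel has moved inward.
```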
This is sort of how 3D printing is done, although using vector graphics all along. I also remember a project where they took donated human cadavers, which I guess they may have injected with dyes to indicate blood vessels and other structures, froze them, sliced them into very thin slices, and took a picture or created a digital image of each slice. The digital slices were then reassembled to create a 3D image of the bodies.
I probably don’t have it quite right how it all works, but it looks like something along these lines is already being done for other things. Does anyone know if there is an app or an easy way to do this?