VISR

fullmetal56

So, just because I'm a curious bastard, this question is posed to any and all programmers: how difficult would it be to create a program that acts similarly to the VISR in ODST or the Promethean Vision in Halo 4, have it run off the combined info from two small cameras and an Arduino, and then display it on a small screen inside a helmet?
 
Sorry, allow me to clarify. What I mean is for the processor to highlight edges on the screen and be able to tell the difference between flora, fauna, buildings, and humans, and to outline each one in a different color.
 
Hmm, I'm not sure. Maybe the backup cameras that cars have could, but I don't know if you could adapt one for this job.
 
You can teach a machine to recognise somewhat specific shapes and track targets, so theoretically people-shapes and the straight edges of buildings should be relatively straightforward. It could look at the colour and perspective of buildings to try to figure out which straight lines it's seeing belong to which real-world object, but that would be quite difficult.
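
To give a flavour of what "teaching a machine to recognise people-shapes" looks like in practice, here's a minimal sketch using OpenCV's stock HOG pedestrian detector in Python. The webcam index, window stride, and scale are just placeholder values I picked; it only catches roughly upright, unoccluded people and it is nowhere near helmet-HUD quality.

```python
# Rough sketch: "people-shape" detection with OpenCV's built-in HOG
# pedestrian detector. Camera index 0 and the tuning values are assumptions.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect candidate person rectangles in the frame.
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 255), 2)  # yellow box
    cv2.imshow("people", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```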

Plants might be tricky and it would likely get confused by images of plants, floral patterns and prints, and complex general patterns. Maybe simple things like trees, where it's a fairly defined shape and could run under the same engine as the person-identifier, would work. Animals... Can be very different shapes depending on movement, angle, and behaviour. Ideally as a backup to the human and animal visual IDs you'd run a tertiary LWIR system that could be cross-correlated with the visible spectrum imaging.

Putting a line around them would be surprisingly difficult to do as well, especially in real time - Photoshop can do some amazing things, but edge detection at the critical point of hard-to-visually-distinguish is very difficult and not at all real-time. Basic edge detection is relatively straightforward, but especially with the plants, it may pull in random bits of other stuff that look like they're part of it. Plants do show up white on IR film, though, which means you could potentially use a different wavelength of IR - near-infrared, which is what IR film and standard night vision respond to - and try to pull general shapes from that as a correlation with the VIS feed.
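
For reference, the "basic edge detection is relatively straightforward" part really is only a few lines; it's deciding which edges belong to which object that hurts. A minimal Canny sketch with OpenCV, where the filename and thresholds are placeholders you'd have to tune per scene:

```python
# Minimal Canny edge-detection sketch with OpenCV. It draws every strong
# edge it finds, with no idea which edges belong to which object.
import cv2

frame = cv2.imread("scene.jpg")                # placeholder test image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)    # smooth first or you get noise everywhere
edges = cv2.Canny(blurred, 50, 150)            # thresholds need per-scene tuning

# Paint the detected edges white over the original frame, VISR-style.
frame[edges > 0] = (255, 255, 255)
cv2.imwrite("scene_edges.jpg", frame)
```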

So... it'd take a lot more than two cameras; an Arduino wouldn't have a snowball's chance in hell of running the software (you may even need a constant cloud connection to, say, Amazon AWS to run it intensively); and having personally used a pair of real FLIR Recon III binoculars, I'd be quite surprised if you could do it to the point where it fits inside a helmet. And costs less than $30,000. And has a useful screen resolution and refresh rate. And a good FoV.

Basically... Nah. Ain't gonna happen for at least 20 years, and only then if DARPA decides it's worth throwing money at.

Edit: I know, I know, I'm good at explaining why things won't work... It's not for lack of wanting though!
 
I've had a similar idea. As for a program that does what the VISR system does in ODST, I'm not too sure, but I imagine you'd need something like this...
http://www.amazon.com/Lilliput-569g...ref=sr_1_5?s=pc&ie=UTF8&qid=1430876855&sr=1-5

My idea has been for helmets like the Gungnir, hooking up a GoPro or some small wide-angle camera like that.

Actually, there's a company that makes a small screen for Arduino. Last I saw they were on Kickstarter. No clue if they succeeded or not, though.

Edit: here it is: https://tiny-circuits.com/tinyscreen.html


What about taking the thermal camera from FLIR's iPhone 5 case and overlaying the thermal information onto a visible-spectrum camera's feed, and then just telling a processor to only show the outer pixels of the heat sources?
 
I mean, all together it would still be around $1000... assuming it could work, but if the program was open source I'd definitely be willing to spend the money to do it.

Surely an Arduino could handle that? Or a group of Arduinos linked together?

Just trying to think outside the box to come up with a solution that will give a similar effect to something that isn't feasible without massive sums of money.
 
To give my two cents in this discussion:

First off, coming from a Computer Science background myself, detection of colors, shapes, shading, and minor movement is already difficult to do. Take, for instance, this picture:

[Image: a plate of fire-roasted pasta with tomatoes, herbs, and Parmesan]

Say you wanted to run a VISR program on this and, just as examples, the noodles were trees, the herbs were rocks, the Parmesan was small plants, and the tomatoes were people. First off, you would have to give the program some rules to follow: long, thin, lighter-colored objects are trees; white flecks are small plants; red, round objects are people; and green objects are rocks. Seems simple enough, right?
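
Just to make those rules concrete, here's a toy version in Python with OpenCV (assuming OpenCV 4): threshold the image in HSV and outline each "class" in its own colour. The filename and the HSV ranges are guesses of mine, and that guessing is exactly the problem, as the next paragraph explains.

```python
# Toy colour-rule classifier: mask each colour range, then outline the blobs.
# The ranges are illustrative guesses (and red hue actually wraps around
# 170-180 in OpenCV's HSV, which is ignored here for brevity).
import cv2
import numpy as np

img = cv2.imread("pasta.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# name: (lower HSV bound, upper HSV bound, outline colour in BGR)
rules = {
    "people (red, round)":  ((0, 120, 80),  (10, 255, 255), (0, 0, 255)),
    "rocks (green)":        ((35, 60, 40),  (85, 255, 255), (0, 255, 0)),
    "trees (pale yellow)":  ((20, 40, 120), (35, 255, 255), (255, 255, 0)),
}

for name, (lo, hi, colour) in rules.items():
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(img, contours, -1, colour, 2)

cv2.imwrite("pasta_visr.jpg", img)
```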

Certain areas of the photo would have issues with object detection. That tomato on the far left, with its bright reflection, could have a lot of it confused with a tree or even small plants. The ranges of colors that would have to be given to each type of object to allow for full detection would overlap, creating a whole mishmash of random detections which may or may not reflect the actual object you are looking at.

This in and of itself would take years of coding to get anything correct enough to use in the field, and taking into account the average starting Computer Science salary of around 70 to 90 thousand a year fresh out of college, the people alone would raise the price of the device a whole lot. Say you have one programmer work on this project, and say it took 5 years to get something that would work in the field: you are looking at a minimum of 350 thousand just to pay that one person to work on the project.

Materials and prototyping will add to that number, and with today's tech there is no way you are going to run this on 2 Arduinos, let alone 1. This program would have to be on at least a laptop with top-of-the-line processing. It may be possible to fit the screen portion in the helmet, but you would be adding a good 10+ pounds of gear that the person has to carry, and if any of it gets damaged in any way the whole thing would shut down.

But have no fear: you can build your own night-vision apparatus out of something as simple as a small camera, a few IR LEDs, and the viewfinder from an old VHS camcorder, the kind that recorded everything to tape (old tech still helps). If you want to do that, there is a video by Kipkay explaining what you need and how to do it here.

Granted, it won't have edge detection, let alone up-to-date edges, but for around twenty bucks it will still give you an edge in any sort of battle you go into, and with everything contained entirely within the helmet, there is less of a chance for it to get damaged. Oh, and you can still take off your helmet too.

TL;DR: Edge detection is hard, Arduinos aren't enough, you will be carrying way too much, and it would be way over the $1000 budget you set, for a while at least. Stick with what you have; this stuff is a good 5+ years away. Oh, and if you really want night vision, click the link up there ^.

Edit: I'm not trying to be mean, so please don't take it that way
 
Your easiest option would be to modify a game console camera, as they are already built to detect people, shapes, depth, and lighting.
 
What about taking the thermal camera from FLIR's iPhone 5 case and overlaying the thermal information onto a visible-spectrum camera's feed, and then just telling a processor to only show the outer pixels of the heat sources?

You could do that as a relatively straightforward graphical process, sure, as long as you don't mind everything from dogs to lightbulbs having coloured lines around them. It's certainly the most feasible option.
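
Something like this, roughly. A sketch in Python with OpenCV, assuming you already have an 8-bit thermal frame and a visible frame of the same size and roughly aligned (the alignment is its own headache: parallax, different FoVs). The filenames and threshold are placeholders.

```python
# Sketch of the "outline the heat sources" idea: threshold a thermal frame,
# find the contours of the hot blobs, draw them over the visible frame.
import cv2

thermal = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)  # 8-bit thermal frame
visible = cv2.imread("visible.jpg")

# Anything brighter than the threshold counts as "hot" -- people, dogs,
# lightbulbs, sun-warmed walls, all of it.
_, hot = cv2.threshold(thermal, 200, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(hot, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(visible, contours, -1, (0, 0, 255), 2)  # red outlines, ODST-style

cv2.imwrite("overlay.jpg", visible)
```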

I mean, all together it would still be around $1000... assuming it could work, but if the program was open source I'd definitely be willing to spend the money to do it.

Surely an Arduino could handle that? Or a group of Arduinos linked together?

Just trying to think outside the box to come up with a solution that will give a similar effect to something that isn't feasible without massive sums of money.

If you did it with just a FLIR Lepton core, a VIS camera and attempted to get away with a handful of Raspberry Pis or something as essentially a render farm, the hardware could maybe cost under $1000- as long as your screen/lens setup for your HUD was super cheap. That's just the beginning, though...

As Barroth points out, the time of a programmer is valuable, and R&D is variable. Even if you used a student or hobbyist programmer who doesn't need the salary of a full CS grad, you're probably still looking at $5-10k for 3 months' work, but something may happen where it takes six or eight months instead and the price balloons. This is why R&D teams are on staff at fixed salaries, and a lot of stuff doesn't get open-sourced.

You'd also need a programmer who's experienced in embedded, so that they can deal with all the non-software aspects of the project. So you're looking for an electronic engineer as much as a computer scientist, really a bit of both. I'm that way inclined, I like my hardware and software as much as each other on a cool project, but I don't think it's particularly the norm. It's usually split across teams, as far as I'm aware, but I've been out of academia for a few years.
 
OK, so it's feasible. To eliminate objects such as light bulbs, you tell it to only display temperatures between, say... 96.5 and 99.5 degrees Fahrenheit. On average a person isn't going to have a core temp below or above that range... usually... unless they're sick. And personally, I would want animals to show up, but knowing that the core temps of each animal would be different, most of them would be eliminated from the screen with that range, which is also OK.

This could at least be used as a starting point. Then later on, assuming the program is open source and the project gained popularity with this community and others, I'm sure other people would want to get involved and improve the software. The only real problems I see to start out are the money needed for hardware and finding volunteers who would want to take the challenge on and write the code in their spare time, kind of like a hobby.
 
OK, so it's feasible. To eliminate objects such as light bulbs, you tell it to only display temperatures between, say... 96.5 and 99.5 degrees Fahrenheit. On average a person isn't going to have a core temp below or above that range... usually... unless they're sick. And personally, I would want animals to show up, but knowing that the core temps of each animal would be different, most of them would be eliminated from the screen with that range, which is also OK.

That was my initial thought, but you'd have to implement more of a sliding scale, since the processor alters the gain depending on what's in the image, so as to maintain maximum contrast and clarity. That would be harder to do than it sounds, lots of calibration required, since you can't just say "here's the temperature range I need"- it's more akin to tonal values and ISO in photography, but with the addition of on-the-fly tonal mapping changes, where for any given frame you may be looking at 70-80 intensity, 135-170 intensity or if you're close in, almost the entire 0-255 range is taken up by the subtleties of one person's heat signature...
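
If you do have a radiometric core, the saner approach is to threshold on reconstructed temperature rather than on the displayed intensity. Here's a rough Python sketch under the assumption that the raw 16-bit counts are in centikelvin (true for some radiometric Leptons with TLinear enabled, but check the datasheet for whatever core you use); also remember the camera sees skin temperature, which runs a few degrees below core, hence the wide range.

```python
# Sketch: threshold on actual temperature, not on the AGC'd 0-255 intensity,
# because the AGC remaps intensities every frame. Centikelvin counts assumed.
import numpy as np

def human_mask(raw_counts, lo_f=90.0, hi_f=101.0):
    """Return a boolean mask of pixels whose temperature is roughly skin-warm.

    raw_counts: 2-D uint16 array of radiometric counts (centikelvin assumed).
    """
    kelvin = raw_counts.astype(np.float32) / 100.0
    fahrenheit = (kelvin - 273.15) * 9.0 / 5.0 + 32.0
    return (fahrenheit >= lo_f) & (fahrenheit <= hi_f)

# By contrast, thresholding the AGC'd 8-bit image at a fixed value breaks as
# soon as the scene changes, because "200 out of 255" means something
# different in every frame.
```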

There's also the fact that medium-toned surfaces (like, say, bricks and tree trunks) in the summer most likely reach around body temperature for much of the day, which would be impossible to eliminate according to the basic rule of a set intensity range.

Then there's the fact that clothing and equipment will drastically alter thermal signatures- people might become almost invisible to the thermal camera in the snow when their body is heavily insulated, or if they're wearing some kind of strapping system the graphics engine may draw lines all over the person rather than around their periphery.

This could at least be used as a starting point. Then later on, assuming the program is open source and the project gained popularity with this community and others, I'm sure other people would want to get involved and improve the software. The only real problems I see to start out are the money needed for hardware and finding volunteers who would want to take the challenge on and write the code in their spare time, kind of like a hobby.

If you can start something of a very basic functionality, set up a wiki and a forum, you may be able to drum up interest slowly over time, especially from places like Instructables. All of my open source hardware time is given to CNC and motion control because that's what I personally love, but you can probably find people who are into graphical processing and machine learning- I'm sure there are at least 10 out there with the required interests and skill sets, probably up to 150 with the same interest and overlapping skill sets. That's a decent project team.

But you can't really act as a producer; you have to get hands-on, no matter how simple your initial tentative steps are. People have to see something to latch onto, have to see where they fit into the progression of the system, and have independent ideas for how to improve on it.
 
You could always read things in the infrared spectrum and use the imaging from it to project silhouettes around objects.

It would only work with lifeforms.
Then your main issue is topology and getting the software to outline only their extremities, and not just their core or levels of their face and whatnot.


But if you are just making a helmet, just toss some LEDs on the inside to illuminate your face, then turn them on and be like "yep, I haz VISR".
It's been done in the past.
 