
NAVFLIR hotspot detector - a proposal


Flagrum

Recommended Posts

Ok. I'll try to explain a bit. The current world (what you "see") is one iteration through all objects, area layers, trees, AI objects, etc. for visibility; now you would need to do this again for the "IR value", up to doubling the required CPU cycles.

 

Or, on each object iterated from the list, you might perform both checks in a row, the visual one and the IR one.

 

There must be a way to implement this; just concluding that nothing can be done because this whole IR sensor problem wasn't envisaged beforehand is not really a satisfying answer for a serious flight simulation.

 

Of course, I don't expect this to be implemented immediately. If we have to wait for the DX12 version of the engine and multi-core support, so be it; it would just be nice to hear that this is being considered and will be resolved at some point.

 

Regarding the AV-8B module, the full feature implementation can be activated later on, once ED adds support for it and implements the necessary changes in the engine. Until then, it might be disabled completely or made to show only active objects, I guess.

 

Maybe it can be made a module option, so each user can choose whichever behaviour suits them best?


Edited by Dudikoff

i386DX40@42 MHz w/i387 CP, 4 MB RAM (8*512 kB), Trident 8900C 1 MB w/16-bit RAMDAC ISA, Quantum 340 MB UDMA33, SB 16, DOS 6.22 w/QEMM + Win3.11CE, Quickshot 1btn 2axis, Numpad as hat. 2 FPH on a good day, 1 FPH avg.

 

DISCLAIMER: My posts are still absolutely useless. Just finding excuses not to learn the F-14 (HB's Swansong?).

 

Annoyed by my posts? Please consider donating. Once the target sum is reached, I'll be off to somewhere nice I promise not to post from. I'd buy that for a dollar!


The only other thing I can think of is a system using the current IR view, which (if I'm not mistaken) is essentially what we as users see, just rendered with a different texture for the sensor. Objects likely to be hot are white; objects likely to be colder are darker. Would it be possible for the sensor, within its FOV, to look at this image and, in a similar way to camera face detection, find the areas of white? Say it points out the 10 most intense spots, or 10 random spots, and then draws the V over each area. The hotspot could be your target, but it could also be a false contact, giving you a better representation of the sensor than either a god-mode sensor that only picks out objects or a fully randomised one. So instead of interrogating objects individually, it interrogates the in-game texture that our eyes and brains normally have to interpret to find hotspots. Maybe this suggestion is even worse performance-wise, but I don't know.
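A minimal sketch of the idea above in pure Python (the function name, threshold and frame are all invented for illustration; this is not how DCS works internally): scan a grayscale "IR frame" and return the brightest pixels above a cutoff, mirroring the "10 most intense spots" cap.

```python
# Hypothetical hotspot picker: treat the faux-IR image as a 2D grid of
# brightness values (0-255), keep pixels above a threshold, and return
# the N most intense ones - the "10 chevrons" limit from the thread.

def find_hotspots(frame, threshold=200, max_spots=10):
    """frame: 2D list of brightness values. Returns up to max_spots
    (row, col, brightness) tuples, brightest first."""
    candidates = []
    for r, row in enumerate(frame):
        for c, value in enumerate(row):
            if value >= threshold:
                candidates.append((value, r, c))
    candidates.sort(reverse=True)          # brightest first
    return [(r, c, v) for v, r, c in candidates[:max_spots]]

# Tiny 4x4 "IR frame": two hot pixels on a dark background.
frame = [
    [10,  10,  10, 10],
    [10, 250,  10, 10],
    [10,  10, 230, 10],
    [10,  10,  10, 10],
]
print(find_hotspots(frame))  # [(1, 1, 250), (2, 2, 230)]
```

Even this toy version shows the catch raised later in the thread: it yields pixel positions in the image, not the world coordinates a HUD marker needs.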

Agreed, an alternative picture process to DCS's faux IR filter would be best - it wouldn't be a 1:1 map of the real hotspot detector, but it would convey the features and feel of the real thing.

 

I'm thinking of an algorithm similar to the 'sketch' function found in some photo apps, etc.

 

[attached images: Maykop screenshot and a 'sketch'-filtered version of the same scene]

i9 9900K @4.7GHz, 64GB DDR4, RTX4070 12GB, 1+2TB NVMe, 6+4TB HD, 4+1TB SSD, Winwing Orion 2 F-15EX Throttle + F-16EX Stick, TPR Pedals, TIR5, Win 10 Pro x64, 1920X1080


Agreed, an alternative picture process to DCS's faux IR filter would be best - it wouldn't be a 1:1 map of the real hotspot detector, but it would convey the features and feel of the real thing.

 

I'm thinking of an algorithm similar to the 'sketch' function found in some photo apps, etc.

 


That's what we currently have. It does not help with detecting hotspots; it only gives us the picture for, say, a TGP screen, and that is pretty much what you already see in the MFDs.

 

To detect "spots" we need coordinates to draw the markers on, as well as to iterate all the objects likely to create hot spots.
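For contrast, the object-iteration approach described here could look roughly like this (a hypothetical sketch; the object table, ranges and names are invented, not DCS's actual API): walk a table of object positions and keep only those inside the sensor's range and cone.

```python
import math

# Hypothetical per-frame cull: iterate a table of hotspot-capable objects
# and keep those within the sensor's range and half-angle of boresight.

def visible_hotspots(sensor_pos, sensor_dir, objects, max_range=8000.0,
                     half_angle_deg=10.0):
    """sensor_dir must be a unit vector. Returns (name, distance) pairs
    inside the sensor cone, nearest first."""
    hits = []
    cos_limit = math.cos(math.radians(half_angle_deg))
    for name, (x, y, z) in objects.items():
        dx = x - sensor_pos[0]
        dy = y - sensor_pos[1]
        dz = z - sensor_pos[2]
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        if dist == 0 or dist > max_range:
            continue
        # Cosine of angle between boresight and direction to the object.
        dot = (dx * sensor_dir[0] + dy * sensor_dir[1] + dz * sensor_dir[2]) / dist
        if dot >= cos_limit:
            hits.append((name, dist))
    return sorted(hits, key=lambda h: h[1])

objects = {"truck": (5000.0, 0.0, 100.0),   # near boresight, in range
           "tank":  (20000.0, 0.0, 0.0),    # beyond max_range
           "ship":  (0.0, 5000.0, 0.0)}     # 90 degrees off boresight
print(visible_hotspots((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), objects))
```

The cost of this approach scales with the number of objects in the table, which is the crux of the performance argument in the rest of the thread.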

Shagrat

 

- Flying Sims since 1984 -:pilotfly:

Win 10 | i5 10600K@4.1GHz | 64GB | GeForce RTX 3090 - Asus VG34VQL1B  | TrackIR5 | Simshaker & Jetseat | VPForce Rhino Base & VIRPIL T50 CM2 Stick on 200mm curved extension | VIRPIL T50 CM2 Throttle | VPC Rotor TCS Plus/Apache64 Grip | MFG Crosswind Rudder Pedals | WW Top Gun MIP | a hand made AHCP | 2x Elgato StreamDeck (Buttons galore)


That's what we currently have. It does not help with detecting hotspots; it only gives us the picture for, say, a TGP screen, and that is pretty much what you already see in the MFDs.

 

To detect "spots" we need coordinates to draw the markers on, as well as to iterate all the objects likely to create hot spots.

 

I get the coordinates bit, but what about using the DCS quasi-IR filter to do this already? The terrain has natural dark and light areas, and units should be fairly white to the FLIR with the system we already have. We just need a system that can process what's available to it, exactly like face detection in cameras, only we want a sensor that picks out the bright spots against a darker background and selects up to 10 of them (I gave examples of picking the most intense, or completely random ones in the FOV). I understand the need for coordinates for the HUD markers, but nothing has to be done to the current infrared system - all we need is to give the sensor the picture that would be on a TGP and devise a system that can pick out the white bits, then select up to 10 of them in view.

 

Again, just like face detection - take an image we already have for our TGPs and process it. If we got a system that took the image on the TGP and pointed out the white (hot) bits, the rest is getting a HUD marker to coincide with them, which shouldn't really be any more difficult than the TGP/Shkval HUD aiming cues - just now there are multiple of them, and they sit on the white bits. After that, job done; a dynamic, realistic Arma 3-style IR system isn't necessary.


Edited by Northstar98

Modules I own: F-14A/B, Mi-24P, AV-8B N/A, AJS 37, F-5E-3, MiG-21bis, F-16CM, F/A-18C, Supercarrier, Mi-8MTV2, UH-1H, Mirage 2000C, FC3, MiG-15bis, Ka-50, A-10C (+ A-10C II), P-47D, P-51D, C-101, Yak-52, WWII Assets, CA, NS430, Hawk.

Terrains I own: South Atlantic, Syria, The Channel, SoH/PG, Marianas.

System:

GIGABYTE B650 AORUS ELITE AX, AMD Ryzen 5 7600, Corsair Vengeance DDR5-5200 32 GB, Western Digital Black SN850X 1 TB (DCS dedicated) & 2 TB NVMe SSDs, Corsair RM850X 850 W, NZXT H7 Flow, MSI G274CV.

Peripherals: VKB Gunfighter Mk.II w. MCG Pro, MFG Crosswind V3 Graphite, Logitech Extreme 3D Pro.


What about using the DCS quasi-IR filter to do this already? We just need a system that can process what's available to it, like face detection in cameras, only we want a sensor that picks out the bright spots against a darker background and picks 10 of them (I gave examples of the most intense or completely random). I understand the need for the HUD markers, but nothing has to be done to the current infrared system - all we need is to have the picture on the TGP, then devise a system that can pick out the white bits and select up to 10 of them in view.

 

Again just like face detection.

With what CPU power do you want to process a "face recognition algorithm" in real time(!)?

 

We're going in circles here... Don't you think a bunch of highly qualified developers who do this for a living know what they are doing?

Shagrat

 



With what CPU power do you want to process a "face recognition algorithm" in real time(!)?

 

We're going in circles here... Don't you think a bunch of highly qualified developers who do this for a living know what they are doing?

 

Really? Honestly?

 

You can insult my intelligence all you like, shagrat, but you're honestly telling me that I can't do this [embedded video] whilst playing DCS because my CPU will melt - even though I can run FaceTrackNoIR with FSX, which runs worse than DCS. All we need to do is take what the video demonstrates and apply it to the faux IR image already in DCS instead of the webcam, have it select 10 bits of white, then put a marker on the HUD where each one is, in similar fashion to the TGP aiming cues.
Edited by Northstar98



That's what we currently have.

I know; that's why it might be a possible solution.

It does not help with detecting hotspots; it only gives us the picture for, say, a TGP screen, and that is pretty much what you already see in the MFDs.

Correct.

 

I was thinking that, as the HUD can/will already display a FLIR overlay for night operations, the same tech could be used to display the daytime markers; the disadvantage is that the image processing to place the 'V's would need to happen on the fly.

 


 

To detect "spots" we need coordinates to draw the markers on, as well as to iterate all the objects likely to create hot spots.

As you have explained, it's out of scope for DCS to iterate all the objects. My impression of the HUD FLIR markers is that they are quite limited in the range and size of objects marked, i.e. buildings, towers, etc., not single vehicles. There's video of an airshow where vehicles in the car park are ignored by the FLIR.

 

Hope this helps explain my current thinking for how it *might* be done, but I'd be grateful for any implementation, given the difficulty.

[attached image: AV-8B HUD markers]



Calm DOWN!

 

You are starting to drive me over the edge; no doubt I'll be reported for this.

 

You are honestly telling me that a system that does the same thing as a simplified magic wand tool from Photoshop Elements 8 is going to cause my CPU to melt. I mean, what you're telling me is that I can't run FaceTrackNoIR (which is free, by the way) with DCS because my CPU isn't powerful enough. REALLY!!!

 

Make a mockery of me all day long, shagrat, but stop giving me rubbish. FaceTrackNoIR can do image processing (it can pick out my facial features) while I run FSX (which performs worse than DCS) without a problem. That's facial recognition, in real time - and I can't complain of any performance issues!

 

AND ALL WE NEED TO DO IS PICK OUT WHITE FROM BLACK!!! (Sorry for the caps, but I fear you're missing the key message.)

You need to store the results with a coordinate reference in a table and iterate through ALL the resulting "white/black" spots against the position of the sensor... for every frame, that is.

 

Again, consider that the developers programming one of the most advanced combat simulations on the civilian market may know what they are doing?

Shagrat

 



The map requires memory; the OBJECTS (trees, buildings, traffic signs) need to be iterated through a list.

That means the list needs to be checked regularly...

If it were that easy and didn't require resources, I am sure we would have it already.

 

So go and play any alternative that is better at simulating modern aircraft... Oh, wait a second. :dunno:

 

Well, since you are at a loss, may I suggest the F-16 at Benchmark Sims... :music_whistling:


i7 10700K OC 5.1GHZ / 500GB SSD & 1TB M:2 & 4TB HDD / MSI Gaming MB / GTX 1080 / 32GB RAM / Win 10 / TrackIR 4 Pro / CH Pedals / TM Warthog


Well, since you are at a loss, may I suggest the F-16 at Benchmark Sims... :music_whistling:
Naaaa, something with a more recent core engine and graphics than the mid-90s, please. I would actually prefer DID's EF2000 TACTCOM. ;)

Shagrat

 



Again, consider that the developers programming one of the most advanced combat simulations on the civilian market may know what they are doing?

 

Quit twisting suggestions into personal attacks on developers, shagrat. Throughout this whole farce of an argument, not once have I stated that the developers are incompetent, lazy or in any way don't know what they're doing, as you seem to think I have. If I have, then truthfully they have my sincerest apologies - that was absolutely not my intention. I am confident the developers know what they are doing, but this is a proposal thread for ideas about how a feature could work. AFAIK RAZBAM haven't quite confirmed what is to be made of the sensor just yet, which is why this thread exists - it's not to poke developers about not being able to do their job; it's to propose suggestions, which is the only reason it exists.

 

You need to store the results with a coordinate reference in a table and iterate through ALL the resulting "white/black" spots against the position of the sensor... for every frame, that is.

 

You have explained perfectly clearly that having too many white spots from lots of objects is going to be problematic for performance. Here's my final suggestion: implement a system that can detect the white like in the video here.

But before you say no because it only has to deal with 2 objects at a time, just read:

 

In dense areas such as cities and forests, could we lose the IR detail, so that the majority of objects are low contrast and aren't picked up or factored in at all? AFAIK the player isn't going to see the actual image, so only prominent objects likely to be hotter than the terrain would be given a contrasting white. In a city this could be the most prominent buildings (for example), whereas other things (lamp posts, smaller buildings, static map-object vehicles, individual trees, etc.) are given a faux IR texture that matches the surroundings, so they aren't registered as hotspots in anything considered (tables or otherwise) and thus aren't factored in at all. Add this to the FOV of the sensor and the sensor's range (using LOD - as you get closer, things become visible), and ALL the resulting white spots become a much smaller number. In clearings, where there aren't so many possible white spots that performance is hindered, we can return to having detail - maybe the odd individual building - so that we consistently get a more accurate impression of how the sensor behaves IRL, with possibly the added bonus of at least partially reducing the performance impact. Then it needs to pick, say, the most intense (highest contrast) 10, or a random 10, and display them on the HUD with an aiming cue - job done. I'd like to make clear that objects physically placed in the mission editor still have the high-contrast white texture, so each is a candidate V and will be put into the table you describe, because its higher contrast is picked up by the system that detects the presence of white.

 

It's difficult to explain, but hopefully you see what I'm trying to do: reduce the number of visible white spots in a given FOV by altering the textures of densely packed map objects in certain locations, so that the number of potential hotspots is reduced. That way there are only, say, 20-30 hotspots max occupying the FOV of the sensor and being processed at any one time.
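The texture-contrast culling proposed above could be pictured like this (a hypothetical sketch; the "ir_contrast" attribute, cutoff and budget are invented for illustration): only objects tagged with high IR contrast ever enter the hotspot table, capped at a fixed budget per sensor FOV.

```python
# Hypothetical pre-filter: each map object carries an assumed per-object
# "ir_contrast" value; low-contrast objects (lamp posts, trees, small
# houses) never become hotspot candidates, and the survivors are capped.

def candidate_hotspots(objects, min_contrast=0.5, budget=30):
    """objects: list of (name, ir_contrast) pairs, contrast in [0, 1].
    Returns at most `budget` names, highest contrast first."""
    hot = [(c, n) for n, c in objects if c >= min_contrast]
    hot.sort(reverse=True)
    return [n for _, n in hot[:budget]]

scene = [("power_plant", 0.9), ("lamp_post", 0.1), ("tree", 0.05),
         ("parked_convoy", 0.8), ("small_house", 0.3)]
print(candidate_hotspots(scene))  # ['power_plant', 'parked_convoy']
```

The point of the filter is that whatever per-frame work follows only ever sees the capped list, not every object on the map.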

 

The system still isn't dynamic, but with the above it should hopefully be less of a performance problem. It does mean a separate version of the current faux IR texture, which means more workload, but it's maybe a potential solution.

 

If I'm honest, I'm happy if the Harrier is fitted for but not with this sensor on release, or at best gets a simplified one (either god mode that sees everything with no false contacts, or one that picks out some real contacts with a few randoms).


Edited by Northstar98




 

Ok, I'll try again.

 

You can "visually", as in black & white / high contrast, edit the textures or the whole screen.

 

Now, how do you get the x,y,z coordinates in 3D space required to put the "V" marker on your HUD?

 

You need to verify every pixel against the 3D reference coordinate system, because it is impossible for computer code to put a marker onto "something that looks white" in a 3D render.

 

Now you need to first establish the coordinates of any hotspot "pixel" or object from the reference point, then check whether this hotspot is inside a 3-dimensional body representing the sensor range. If this calculation returns "true", you need to store the coordinates and create the marker, all before you can render the actual frame.
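The projection half of that step can be sketched under assumed axes (x forward, y right, z up) with a simple pinhole model; all names and values here are illustrative, not engine code:

```python
import math

# Hypothetical world-to-HUD projection: given a hotspot's offset from the
# sensor, test whether it sits inside the sensor's field of view and, if
# so, return normalized HUD coordinates for the "V" marker.

def project_to_hud(rel, fov_deg=20.0):
    """rel: (forward, right, up) offset from the sensor in metres.
    Returns (hud_x, hud_y) in [-1, 1], or None if outside the FOV."""
    fwd, right, up = rel
    if fwd <= 0:
        return None                      # behind the sensor
    half = math.tan(math.radians(fov_deg / 2))
    hud_x = right / (fwd * half)
    hud_y = up / (fwd * half)
    if abs(hud_x) > 1 or abs(hud_y) > 1:
        return None                      # outside the HUD FOV
    return (hud_x, hud_y)

print(project_to_hud((1000.0, 0.0, 0.0)))    # dead ahead -> (0.0, 0.0)
print(project_to_hud((1000.0, 5000.0, 0.0))) # far off axis -> None
```

This is the cheap direction (world position to screen position); the expensive direction shagrat objects to is the reverse, recovering world coordinates from "white pixels".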

 

Repeat this process for every frame.

 

As Zeus67 mentioned, the active vehicles' coordinates are available in the sim in a table and can be queried...

 

If you can write some lines of code that can evaluate coordinates from texture pixels at the scale of the DCS maps without tremendously impacting performance, be my guest. That would be a great achievement.

 

Edit: ...and for the HUD display you would need an additional "edited" render pass for each frame to identify the "white" pixels in the first place, which requires additional resources.

 

That is why I suggested a randomized system of false positives, which would look pretty much like the "false positives" in the real-world NAVFLIR, where it picks up temperature contrasts in the woods, on grass plains, in rocky areas and on the water...


Edited by shagrat

Shagrat

 



Ok, I'll try again.

 

You can "visually", as in black & white / high contrast, edit the textures or the whole screen.

Yes, this is what I suggested above, so we can get a system that can identify white against the black background. The reason it should be edited is to keep the number of white hotspots down to where performance isn't significantly affected.

 

Now, how do you get the x,y,z coordinates in 3D space required to put the "V" marker on your HUD?

 

You need to verify every pixel against the 3D reference coordinate system, because it is impossible for computer code to put a marker onto "something that looks white" in a 3D render.

Okay, so the thing demonstrated in the video (if you watched it) can't simply be applied to a picture that a TGP would see, with coordinates then generated from that - am I understanding this right? I wrongly thought it was simpler than that, because the HUD and the NAVFLIR sensor AFAIK share the same FOV ahead. So if the sensor is seeing black and white, picking out the white spots from the black, exporting coordinates and providing a marker on the HUD, then in essence we're getting the same image on the HUD and the sensor; only the sensor sees black and white and marks the white, and where it marks the white should be where a marker goes (if, say, it draws a box around the white, the coordinate in the sensor is where it appears on the HUD). But if you're saying we can't get a system that visually recognises white against a black background in what an IR TGP picture would look like, then provides coordinates to display a marker and replicates that marker on the HUD as a V, then I understand.

 

Now you need to first establish the coordinates of any hotspot "pixel" or object from the reference point, then check whether this hotspot is inside a 3-dimensional body representing the sensor range. If this calculation returns "true", you need to store the coordinates and create the marker, all before you can render the actual frame.
Hmm, I thought this would be where a quasi-LOD system would come in: there physically wouldn't be contrast - nothing to check - until you got close and the faux IR texture provided contrast for the sensor. This maybe takes out range, but perhaps not FOV, if I'm understanding correctly. But I guess I see the issue: you're telling me that we can't get a system that will detect high-contrast white and then output coordinates so they can be used on the HUD? Correct? Because now I see the issue more clearly.

 

As Zeus67 mentioned, the active vehicles' coordinates are available in the sim in a table and can be queried...
Yes, I understand this, but I fear this would make a potential sensor too 'godly'. But Zeus also said they wouldn't be implementing a system that checks the environment, so actually this is fairly irrelevant now.

 

If you can write some lines of code that can evaluate coordinates from texture pixels at the scale of the DCS maps without tremendously impacting performance, be my guest. That would be a great achievement.
Indeed it would be; sadly, I am no programmer.

 

Edit: ...and for the HUD display you would need an additional "edited" render pass for each frame to identify the "white" pixels in the first place, which requires additional resources.
Is this the whole concept of recognising the white spots in the first place? Because I'm not sure I understand.

 

That is why I suggested a randomized system of false positives, which would look pretty much like the "false positives" in the real-world NAVFLIR, where it picks up temperature contrasts in the woods, on grass plains, in rocky areas and on the water...
However, having it random would mean that markers wouldn't follow terrain or sit in places of likely different thermal contrast, correct? Meaning, for me at least, it becomes too easy to differentiate false contacts from true contacts; otherwise it's satisfactory for initial release, I would say.

 

I don't know; I just thought a system that could look at what the sensor is seeing (i.e. what we see when we look at an IR image through a TGP), then, in a similar fashion to face detection, find the white spots, provide coordinates as you say, and output this data on the HUD as a marker pointing to the white spot, would be a good solution if feasible - without going overboard with a full-on IR simulation (which would be even more taxing, as the white spots would become dynamic and would have to take into account where the sun is reflecting off things, the ground temperature, the air temperature, precipitation, vehicle engine state, etc., and then do all of the above).


Edited by Northstar98



Yes, this is what I suggested above, so we can get a system that can identify white against the black background. The reason it should be edited is to keep the number of white hotspots down to where performance isn't significantly affected.

 

Okay, so the thing demonstrated in the video (if you watched it) can't simply be applied to a TGP picture, with coordinates then generated from that - am I understanding this right? I wrongly thought it was simpler than that because the HUD and the NAVFLIR sensor AFAIK share the same FOV ahead, but it still means recognising the white and outputting coordinates. Have I got this right?

 

Hmm, I thought this would be where a quasi-LOD system would come in, as in there physically wouldn't be contrast until you got close - this is more an alteration of the faux IR texture than of the sensor; that's what I mean to say. But I guess I see the issue: you're telling me that we can't get a system that will detect high-contrast white and then output coordinates so they can be used on the HUD? Correct? Because now I see the issue more clearly.

 

Yes, I understand this, but I fear this would make a potential sensor too 'godly'.

Exactly; that was why I proposed using randomized false positives as a realistic-looking solution. If you watch the chevrons in the video with the HUD, you can see how the NAVFLIR identifies "random-looking" spots. That's why I think it would work pretty well.

Also, the real-life system needs a few seconds, but then skips the wrong ones - from reflections, stones, trees, etc.

 

Indeed it would be, sadly I am no programmer

 

Is this the whole concept of recognising the white spots in the first place? Because I'm not sure I understand.

Yes, you would need to render the hotspot field of view of the HUD, identify the hotspots, then render the "normal" screen and add the chevron ("V") markers.

It should be a similar frame hit to what you currently get from the TGP. If you have both NAVFLIR and TGP active, the frame hit likely triples.

 

However, having it random would mean that markers wouldn't follow terrain, correct? Meaning, for me at least, it becomes far too easy to differentiate false contacts from true contacts; otherwise it's satisfactory for initial release, I would say.

No, they would follow the terrain if we calculate the "randoms" from a coordinate reference like the active vehicles in the field of view.

You would just add 1-5 random markers to the list, calculated from the active vehicles' position(s), up to the maximum of 10 chevrons that the system can handle.

 

And though I am not a developer, I guess a list of 10 markers won't hurt the CPU cycles.
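To make the "vehicles plus 1-5 scattered randoms, capped at 10 chevrons" idea concrete, here is a hedged Python sketch. Every name in it is hypothetical and the scatter distance is an arbitrary guess, not anything from DCS:

```python
import random

# Sketch of the proposal above: seed the marker list with the real (active)
# vehicle positions, then scatter 1-5 extra "false" markers around them,
# never exceeding the 10 chevrons the symbology can display.

MAX_CHEVRONS = 10

def build_marker_list(vehicle_positions, scatter=150.0, rng=random):
    markers = list(vehicle_positions)[:MAX_CHEVRONS]
    n_false = rng.randint(1, 5)
    for _ in range(n_false):
        if len(markers) >= MAX_CHEVRONS:
            break
        # Offset each false contact from a real vehicle, so it lands on
        # plausible nearby terrain instead of a fully random map position.
        vx, vy = rng.choice(vehicle_positions)
        markers.append((vx + rng.uniform(-scatter, scatter),
                        vy + rng.uniform(-scatter, scatter)))
    return markers

vehicles = [(1000.0, 2000.0), (1200.0, 2100.0)]
markers = build_marker_list(vehicles)
print(len(markers))  # between 3 and 7 with two vehicles
```

Since the list never exceeds ten entries regardless of scene complexity, the per-frame cost is a handful of random draws, which supports the point that it shouldn't hurt CPU cycles.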

 

I don't want to start a brawl here, or try to impose a gamey mock-up.

I really think this is the best possible balance between a "god-mode" sensor and not having the system at all. :dunno:

Shagrat

 

- Flying Sims since 1984 -:pilotfly:

Win 10 | i5 10600K@4.1GHz | 64GB | GeForce RTX 3090 - Asus VG34VQL1B  | TrackIR5 | Simshaker & Jetseat | VPForce Rhino Base & VIRPIL T50 CM2 Stick on 200mm curved extension | VIRPIL T50 CM2 Throttle | VPC Rotor TCS Plus/Apache64 Grip | MFG Crosswind Rudder Pedals | WW Top Gun MIP | a hand made AHCP | 2x Elgato StreamDeck (Buttons galore)

Link to comment
Share on other sites

Yes, you would need to render the hot spot field of view of the HUD, identify the hotspots and then render the "normal" screen and add the chevron ("v") markers.

Should be a similar frame hit to what you currently get from the TGP. If you have both the NAVFLIR and TGP active, the frame hit likely triples.

I see. I just thought that altering the faux IR texture so there are fewer white hotspots for it to pay attention to, say 20 in any given area (so in cities maybe only the large buildings and static vehicles; streetlamps etc. aren't given a white contrast, so they're undetectable by the system; same goes for trees in forests, which shouldn't really be hotspots anyway), would mean the FPS hit wouldn't be too bad. Oh well, you've made your point, and well :surrender:

 

No, they would follow the terrain if we calculate the "randoms" from a coordinate reference like the active vehicles in the field of view.

You would just add 1-5 random markers to the list, calculated from the active vehicles' position(s), up to the maximum of 10 chevrons that the system can handle.

Ahh okay I understand.

 

And though I am not a Developer I guess a list of 10 markers won't hurt the CPU cycles.
Yes, it was the same logic I used for the system I've been describing, although a randomised system should be far more FPS-friendly regardless.

 

I don't want to start a brawl here, or try to impose a gamey mock-up.

I really think this is the best possible balance between a "god-mode" sensor and not having the system at all. :dunno:

Neither do I, shagrat, and apologies for being a grumble monster these past few days. As for the best possible system: if the performance issue really is that much of a concern, then I think you might be right.
Edited by Northstar98

Modules I own: F-14A/B, Mi-24P, AV-8B N/A, AJS 37, F-5E-3, MiG-21bis, F-16CM, F/A-18C, Supercarrier, Mi-8MTV2, UH-1H, Mirage 2000C, FC3, MiG-15bis, Ka-50, A-10C (+ A-10C II), P-47D, P-51D, C-101, Yak-52, WWII Assets, CA, NS430, Hawk.

Terrains I own: South Atlantic, Syria, The Channel, SoH/PG, Marianas.

System:

GIGABYTE B650 AORUS ELITE AX, AMD Ryzen 5 7600, Corsair Vengeance DDR5-5200 32 GB, Western Digital Black SN850X 1 TB (DCS dedicated) & 2 TB NVMe SSDs, Corsair RM850X 850 W, NZXT H7 Flow, MSI G274CV.

Peripherals: VKB Gunfighter Mk.II w. MCG Pro, MFG Crosswind V3 Graphite, Logitech Extreme 3D Pro.


Neither do I, shagrat, and apologies for being a grumble monster these past few days. As for the best possible system: if the performance issue really is that much of a concern, then I think you might be right.

No problem. Same here; I hope we can convince Razbam to make a compromise that balances realism and features vs. resources and CPU load in a feasible way.

 

Maybe Zeus and the team can come up with something even better, but I hope our suggestions help point them in the right direction.

Shagrat

 



In this video you can see the HUD FLIR... randomization is IMO not the way to do it, as we can see how it points to the proper objects (air & ground)...

 

Here we go again... Read post #4 of this thread. So you prefer a "God Mode" pointer that always points out vehicles only?

Shagrat

 



Here we go again... Read post #4 of this thread. So you prefer a "God Mode" pointer that always points out vehicles only?

 

Play nice. I've been tempted to close this thread because I feel it is devolving into a catfight about IR detection.

 

My view on this issue: the "false targets" are a bug and not a feature. The manual explains the issue and how to mitigate it by using filters and the like, but in the end it says that they cannot be 100% discarded and thus the pilot must live with them.

 

This is the same issue as with AG radars and ground clutter; the difference is that radars have Doppler filtering to keep false targets under control.

 

The USMC wants the "God Mode" NAVFLIR hotspot.

 

Now, to the simulation problem. As I said, only active vehicles can be detected by the HS detector. Trying to interrogate the environment in front of the aircraft consumes too many computer resources. This is the same reason the AG radar is such a headache. We hit this problem when we were developing the M-2000C air-to-ground radar ranging. Even with a high-end computer, the FPS could drop to single digits unexpectedly, especially when DCS was redrawing the scenery.

I don't like the random spots solution. If they are random, I cannot control where they are going to be displayed and some could be displayed in the air, which is a big no.

Quite likely the 1st iteration of the hot spot detector will work with active vehicles only. Remember that this is what the pilots of the real aircraft want: no false targets at all.

We will look for a technical solution that can create at least some "false targets" but that will take some time. I have some ideas but I have not shared them with ED and I won't until the time to develop the hot spot detector is at hand.

"Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning."

"The three most dangerous things in the world are a programmer with a soldering iron, a hardware type with a program patch and a user with an idea."


I don't like the random spots solution. If they are random, I cannot control where they are going to be displayed and some could be displayed in the air, which is a big no.

Quite likely the 1st iteration of the hot spot detector will work with active vehicles only. Remember that this is what the pilots of the real aircraft want: no false targets at all.

We will look for a technical solution that can create at least some "false targets" but that will take some time. I have some ideas but I have not shared them with ED and I won't until the time to develop the hot spot detector is at hand.

Hi Zeus, that was why I suggested using the active vehicles as a reference and just randomizing the x,y coordinates, and only by a small value, to scatter them so they don't appear in the sky...

 

If I understood correctly earlier, the false hotspots need a few seconds (of movement) to be filtered out, but they usually will be filtered.

 

It may not perfectly represent the real-life system, but it would mimic the distraction for the pilot pretty well.
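The "scatter them so they don't appear in the sky" part could be handled by clamping each false contact to the terrain height at its scattered position. A minimal Python sketch, assuming some engine-side height query exists; get_terrain_height below is a stand-in for that query, not a real DCS call:

```python
import random

# Sketch of the "scatter, but never in the sky" constraint: after offsetting
# a false contact in x/y from a real vehicle, pin its altitude to the
# terrain height at the new spot, so the chevron can never be drawn in mid-air.

def get_terrain_height(x, y):
    # Placeholder terrain, purely for the example: a gentle slope.
    return 0.01 * x + 0.02 * y

def false_contact_near(vehicle_xy, scatter=150.0, rng=random):
    vx, vy = vehicle_xy
    fx = vx + rng.uniform(-scatter, scatter)
    fy = vy + rng.uniform(-scatter, scatter)
    # Clamp the marker to the ground at its scattered position.
    return (fx, fy, get_terrain_height(fx, fy))

x, y, z = false_contact_near((1000.0, 2000.0))
print(z == get_terrain_height(x, y))  # True: marker sits on the terrain
```

One height lookup per false marker (at most a handful per frame) is cheap compared with interrogating the whole scene, which fits the performance constraint Zeus described.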

Shagrat

 



Hi Zeus, that was why I suggested using the active vehicles as a reference and just randomizing the x,y coordinates, and only by a small value, to scatter them so they don't appear in the sky...

 

If I understood correctly earlier, the false hotspots need a few seconds (of movement) to be filtered out, but they usually will be filtered.

 

It may not perfectly represent the real-life system, but it would mimic the distraction for the pilot pretty well.

 

To expand slightly: a random spread around a vehicle with a bias toward the boundaries between terrain types, or toward bodies of water, would probably get you 80% of the way there. Both of those seem fairly easy to detect, given that the Viggen's ground-mapping radar can display them with no dramatic loss in performance. You could even use it to display random targets without a vehicle ahead, as long as you know the sensor's FOV; again, it's a close parallel to the ground-mapping radar we already know to be possible.

 

It isn't quite as random as random-around-vehicles, so a skilled pilot can more quickly evaluate target markers based on their location on the terrain, and it mimics the real-world behavior a little better: there are false targets even when not looking at a vehicle, and from the videos I've seen they often seem to be arranged in lines along thermoclines.
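The boundary bias could be cheap to compute on a coarse terrain-type grid: mark cells whose neighbours have a different type (shorelines, field/forest edges) and sample false contacts from those cells most of the time. A rough Python sketch, with the grid, type labels, and 80/20 weighting all invented for illustration:

```python
import random

# Sketch of the boundary-bias idea: on a coarse grid of terrain types,
# cells touching a different type (field/forest edges, shorelines) are the
# likely thermocline locations, so false contacts are drawn from them
# preferentially instead of uniformly over the map.

def boundary_cells(grid):
    """Cells that touch a different terrain type (4-neighbourhood)."""
    cells = []
    for y, row in enumerate(grid):
        for x, t in enumerate(row):
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= ny < len(grid) and 0 <= nx < len(row) and grid[ny][nx] != t:
                    cells.append((x, y))
                    break
    return cells

def pick_false_target(grid, rng=random, boundary_bias=0.8):
    """Sample a boundary cell 80% of the time, otherwise any cell."""
    edges = boundary_cells(grid)
    if edges and rng.random() < boundary_bias:
        return rng.choice(edges)
    return (rng.randrange(len(grid[0])), rng.randrange(len(grid)))

# "W" = water, "F" = field: the shoreline runs down the middle.
grid = [["W", "W", "F", "F"],
        ["W", "W", "F", "F"]]
print(pick_false_target(grid))
```

The boundary map only changes when the terrain does, so it could be precomputed once per map region rather than per frame, keeping the runtime cost to a couple of random draws.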

 

Of course, ED might be cooking up something fancier, as far as querying scenery objects goes, for the Hornet. Who knows?

Black Shark, Harrier, and Hornet pilot

Many Words - Serial Fiction | Ka-50 Employment Guide | Ka-50 Avionics Cheat Sheet | Multiplayer Shooting Range Mission


I don't like the random spots solution. If they are random, I cannot control where they are going to be displayed and some could be displayed in the air, which is a big no.

For what it's worth, the V's can appear in the sky. It's not the most usual place but it can happen.

See this video:

at 23:09

[Attached screenshots: FLIRSky1.png, FLIRSky2.png]


For what it's worth, the V's can appear in the sky. It's not the most usual place but it can happen.

See this video:

at 23:09

 

Based on your images, that means the hotspot detector could be used to try to find enemy aircraft in the sky.



Based on your images that means that the hotspot detector can be used to try to find enemy aircraft in the sky.

I don't know, only someone with first hand knowledge on the sensor could answer that. I'd naively say yes, as a FLIR is nothing more than a camera which detects IR gradients, so in theory it would be possible.

But that's really something to be checked.

There can be numerous factors: maybe the sensor only reacts to an S/B ratio that cannot be obtained with an aircraft, or one that is only strong enough with land vehicles, lots of exhaust gas, large spots (like a thermal in the sky), etc.

Also, I know that depending on the maturity of a system (in general), there can be rejection of targets that would appear above the ground; maybe the version in the video doesn't have that, but the one you're modelling has it (if at all).


Edited by PiedDroit
