Everything posted by Fri13

  1. Those HUD circles are a limitation of physics. They exist there too, but in those videos the camera is further from the HUD, which is why the picture looks larger. The CRT projector inside the HUD has its own collimator-corrected design eye, and it can't project a picture over a larger area than the projector glass under the HUD allows. Even so, the HUD projections in DCS are far too detailed and high-resolution: they lack the proper glow and diffusion, as well as the limited design-eye position from which the symbology can actually be seen.
  2. The + variant opens up a lot for those who enjoy flying the Harrier with the TPOD, since they don't use the DMT anyway. The ability to carry an A/A load suddenly makes it a real challenger to the Hornet, as the Harrier can still operate from a FOB if required. The coming ground-unit changes will affect every unit there is. Hopefully ED develops dozens of different AIs, applied to different unit types, each behaving through sub-patterns based on the situation it faces (enemy under fire, friendly under fire, the unit itself under fire, under threat, etc.). That would make the Harrier even more attractive, with the A-10C and the helicopters as the main beneficiaries. Fighters at medium altitude will (and should) have less capability to find and engage ground units, so the real mud-movers' job of supporting the ground units becomes genuinely important. The radar situation is the same as in the Hornet: not really useful for anything other than detecting ships at sea, plus harbors, large cities, and similar very distinct landmarks. You are almost as well off with an EHSI and a digital map. What the Hornet has now, the Harrier+ would have as well.
  3. @Zeus67 I am sincerely still waiting for a reply. If you are on holiday, then please answer afterwards. It is better to get things done correctly once than to revisit and redo parts later, which is why Razbam should provide evidence for these questionable features.
  4. The Mi-8 is maybe the most enjoyable helicopter for cruising. You get smooth, steady flight, turns, and approaches with it, though the transitional phase can be scary for first-timers because it shakes so much. It is amazing how many troops you can carry in one, which makes it a nice air taxi. I'm just waiting for proper infantry modeling. I don't like it as much with gun pods; it is more of a "fire rockets, saturate the area, and bug out" aircraft. I am hoping we see Petrovich in it as well: calling out targets and threats, taking control now and then when needed, and most importantly handling the radios and navigation. I don't know how many hours (closer to 100 than 200) I have spent flying it without outside visuals, inside clouds or mist, or over the sea searching for a ship to land on and deliver goods. It is like flying a truck. I hope that in the future we get good Mi-8 + Mi-24 combo deals, so it could be purchased as a gift. The funny thing is that once I learned where the distinctive weeping sound comes from (the single bar), I can't get it out of my head. I'm so accustomed to hearing it that I expect it on every helicopter.
  5. ECM was very much a thing back then. The F-10 was the USMC's main ECM aircraft, succeeded by the A-6 in 1963. The F-4s carried ECM pods, but the F-4E had an integrated set in the hump behind the canopy, and the F-4F had it under the nose. The F-100s and F-4s carried the ALQ-71 for ECM; even the F-105 Wild Weasels were required to carry one, though they often didn't turn them on. But it was not capable of defeating the S-75's proximity fuse, until that fuse was itself defeated in 1966: “The intercept was perfect,” Dale Weaver, the senior Ryan contractor on the project, later reported. The 147E got a complete set of radar guidance and proximity fuse information. The mission even successfully recorded the force of the blast wave that destroyed the drone. The Air Force used the mission data to develop a warning receiver that fed into a jammer. This electronic shield would prevent any SA-2 missile from hitting any aircraft that carried it. But how could they test such a device without endangering a pilot? Bring in another drone. Ryan engineers retrofitted a single Model 147, designated type 147F, with the jammer. This arrangement was known as Shoehorn, and project engineer Robert Schwanhausser said the massive electronic gear had to be "literally shoehorned" into the small drone. The U.S. Navy flew it over Vietnam on missile-baiting missions in July 1966, and the device attracted at least eleven SA-2 missiles, all of which failed to bring it down during several missions. It was finally downed by the twelfth. Shoehorn became the backbone for the AN/APR-26 countermeasures set fitted to U.S. aircraft, including the B-52 Stratofortress, F-4 Phantom II, and C-130 Hercules. The AN/APR-26 would warn a plane that had been spotted by radar, giving the pilot the chance to change course and get out of the air defense zone. It could also detect when the radar locked on, indicating that a missile was on the way so the pilot could carry out evasive maneuvers to throw the missile off.
In the final phase of the SAM attack, when the proximity fuze was activated, the warning tone would increase in pitch and change from a continuous tone to a warble, telling the pilot it was time to do or die. “The most important factor in using jinking maneuvers has always been saving your most radical turn or dive to the very last moment before the SAM arrived,” says Miller. “The warble was the signal ‘jink hard NOW!’” And finally, if the missile got too close, the system's last line of defense would attempt to defeat its proximity fuse. But crews weren't happy with a device that might detonate the missile when it was already within lethal range, Miller says. Pilots would often leave the automatic jammer turned off and trust to their own skill. With all of these defenses working simultaneously, U.S. aircraft survivability ratios against the SA-2 began to climb. In 1965, the year before the CIA's successful mission, SA-2s destroyed one aircraft for every four missiles fired. By 1967, it took closer to 50 missiles." https://www.popularmechanics.com/military/aviation/a34386117/suicide-drone-cia-sa-2/ Because of ECM, the S-75 went from roughly a 100% hit probability against a slow, straight-flying target to less than 10%. The missile's capability for 10-11 g maneuvers means it can only intercept a 3-4 g target (4-5 g at best), and you need to time your maneuver properly. You don't just "dive under it": you need to perform the right maneuver at the right moment, and prepare for it beforehand by flying so that the missile's attitude is the proper one when the time comes. Today the S-75 is almost useless against a 9 g fighter with situational awareness, and even more useless against modern ECM systems. The same applies to all missiles: maneuvering, and the preparation for it, is the key thing. But it can become too difficult and challenging.
  6. If only it worked so that, on spotting a threat attacking you, you first take cover before you engage. There are rules that do the opposite: MBT crews quickly learn that they have the armor and firepower to deal with some sudden threats, and so take an offensive position. But that requires the crew to know there is no threat worth hiding from, or that there is no cover to go to. This is why smoke screens become a valuable tactic: they generate a concealed area where you can at least avoid being aimed at so easily. And it is a totally different thing to be inside an APC than inside an MBT with its frontal armor toward the threat.
  7. Assisting features should be opt-in, not opt-out. The boxes became annoying because they take up so much space, but they also showed information the way an IHADSS would. The crosshair (which reminds me so much of the Klingon emblem) is already so accurate by default that it is a little too accurate.
  8. Likely that same bug: the system should adjust itself using the INS to compensate for aircraft movement relative to the mapped area. This was one reason I didn't use the freeze feature; it was totally useless for its purpose of flying in to get a detailed enough picture, freezing the frame, turning away to safety, designating the target from the frozen picture, and then turning back toward the target. EXP3 is less accurate than EXP2 at close ranges; that is a technical limitation. But it is not a limitation that the radar builds up an inaccuracy between the start of a sweep and the end of a sweep proportional to how far the aircraft moved between them. I talked about this at release: turning the aircraft between sweeps increased the error so much that you could get a designation in a totally wrong place, as if the system had no correlation between where it captured the picture and where the aircraft is relative to that picture. There is also a risk that the radar becomes too capable and accurate given the simulation's limits (it doesn't render trees, and hence no tree shadows, as the terrain is just ground and buildings). That gives you the possibility to do what you shouldn't be able to do, such as detecting individual vehicles.
  9. And please note the "design eye" limited viewing area, as well as the sizes of the circles drawn on the HUD. The proper sizes are given in mils in the manual, and newer manuals document what has been changed. How something is shown on the HUD isn't classified; what may be classified is the information itself (weapon range, coordinates, etc.).
  10. 1) The hand scaling is wrong. Everyone should hold their hands in front of them with fingers spread, then touch the same fingertips together (index to index, thumb to thumb, etc.). The gloves right now are about 2 cm too long in the fingers. The glove 3D model should scale to the finger positions and palm center so the virtual fingertips always match when the real fingertips touch. 2) Tracking requires tweaking. It is very jumpy right now and gets confused by the fingers very easily. As said, we need a calibration option; even in my short test period I have had the laser beams pointing 90 degrees in the wrong direction. 3) Deadzones. As LASooner suggested, when the flight controls are being moved, have the virtual hands turned off or snapped to them. The potential is that one can operate the cockpit without any extra work: just reach for the button and be done. 4) Just as with the touch controllers, remove the laser pointers. Forget them. Make the fingertips (thumb and index finger at least) the active parts, so you only need to touch things. The icing on the cake would be switches that move in the opposite direction from where they are touched, or that toggle state on each touch. So for a left/right switch, pushing from the right would move it left. But the easiest approach is to make each touch (with a pause between activations, say 200 ms) flip the switch or button, so that a switch doesn't start repeating itself at high speed. If we don't have laser beams, we don't need to worry about accidental touches either; so make the laser beam optional. 5) Design the system so it can be activated and disabled without mouse and keyboard. I don't have a mouse or keyboard anywhere near the flight chair, so it was annoying to jump back and forth to disable the touch controllers, get the hands working, and even get the menu to open. Which leads to gestures. 6) We need some clear, intentional gestures for basic things like the menu.
The easiest way, really, is to do what other VR games do: put a watch or something on the wrist with buttons for the menu, so the menu is the #1 thing to be accessible. Want to make it really fancy? Make a wristwatch that shows the current (real) time; touch it with the other hand to open the DCS menu. Even activation/deactivation could be done with it, such as spreading the fingers to toggle that hand on/off, or requiring the index finger straight or thumb up to keep the hand temporarily active/visible, and disabling it otherwise. Where I was disappointed was that I couldn't get the fingertips to work. The only things that really matter for operating cockpits are steady hands and the ability to push things with a finger, and that requires the fingers to be tracked as accurately as in the Orion demos. There is very good potential here to offer a "controller free" cockpit. It requires work, but with effort and good design it will succeed.
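The touch-toggle with a pause between activations suggested in point 4 can be sketched as a simple debounce. This is a minimal illustration only: the class name and API are my own, and only the ~200 ms lockout idea comes from the post.

```python
import time
from typing import Optional


class DebouncedSwitch:
    """Two-state cockpit switch that flips on each fingertip touch,
    with a lockout so a lingering finger does not re-trigger it.
    The 0.2 s default mirrors the 200 ms pause suggested above."""

    def __init__(self, lockout_s: float = 0.2):
        self.state = False          # e.g. False = down, True = up
        self.lockout_s = lockout_s
        self._last_flip = -float("inf")

    def touch(self, now: Optional[float] = None) -> bool:
        """Register a fingertip contact; returns the (possibly new) state."""
        if now is None:
            now = time.monotonic()
        if now - self._last_flip >= self.lockout_s:
            self.state = not self.state
            self._last_flip = now
        return self.state
```

Passing explicit timestamps keeps the logic testable; in a real VR backend `touch()` would be called from the collision/contact event of the tracked fingertip.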
  11. I didn't say they are realistically done; I said that realistic effects are applied. I didn't talk about reflections either, so don't build strawman arguments. If you want to argue that every single canopy is crystal clear, with no light diffraction or bending characteristics and no scratches, that would be foolish. We can't currently have effects like depth-of-field calculations for the eyes, but such blurring should nevertheless be applied to all canopies, because you are supposed to look through them rather than at them. Again, you are just making hyperboles without substance. If you are not willing to read and try to understand, that is your fault. Your argument now rests solely on the idea that no one else understands that DCS World is just a simulator and not reality, and you insult everyone's intelligence with hyperboles like "only the real thing is realistic".
  12. Fri13

    TPod Slewing

    If you already have a TPOD, then forget the DMT. You can even turn it off on the misc panel behind the stick. (In reality you don't turn it off, because then the DMT seeker is not electrically locked in place, making it possible to damage it despite the gimbal's physical dampening protectors, but that is not modeled in the Harrier.) Just select a waypoint on the EHSI and press DESG to designate it as the target, then press SLAVE on the TPOD page to slew the pod there. I haven't slewed the TPOD with the buttons for a couple of updates now, but the speed had changed the last time I used it; IIRC there just wasn't any "acceleration" at the start of the movement. It was not so slow that you couldn't use it, though. It sounds like you are in INS mode, because by default the TPOD should be slaved to the TD, which the INS moves at high speed. So press SSS Down twice to get into TPOD mode, and check that you are not in INS designation.
  13. It would be nice to have the mission editor automatically add the ground units' proper radio channels and frequencies as well. And if a flight has take-off and landing waypoints, have those prioritized by the system so the player doesn't need to search for them in the common list that includes everything (that list should be one of the last pages anyway; if you are flying, say, in the north Caucasus, you don't care what is in the south Caucasus, 400 km away from your operating zone). Smart, dynamic allocation and ordering of these is needed in all modules anyway. The Mi-24P is designed to operate with ground forces, so based on proximity to the waypoints and the units there, their frequencies and channels should be listed together with the unit type/name. This will be important in the future dynamic campaign, where you get quick missions to go support some unit using the same engine: you need to know you can reach them, talk to them, and have them guide you to targets.
  14. That is for each player to do if they want. There are various levels of realism. For example, we have a dirty canopy or fragmented/scratched glass instead of perfectly clean glass; one doesn't need real glass to get a realistic effect from it. The same applies to controlling the aircraft. There are four different levels: 1) Keyboard or joystick, where systems are operated with a device that doesn't belong to the real cockpit and doesn't require knowing where the functions are in the real cockpit. For example, the landing gear lever is bound to a joystick button so it can be operated easily when landing, without looking at it or reaching for it. The unrealistic part is that the player uses functions on a joystick that shouldn't have them, or on a device like a keyboard that has all of them. 2) Mouse, where systems are operated in their real cockpit locations and in the corresponding order. The player needs to know where each function is and how to operate it (switch up or down, etc.), limited only by the realistic modeling of the cockpit. Because the mouse cursor is locked to the visible view, the player must always look at a function (have it inside the FOV) to click it, and has no way to click something without looking at it (such as reaching the landing gear lever and operating it by touch). The unrealistic part is that the player's hand is always on a control device located elsewhere than the real function in the cockpit (such as using the right hand to click a landing gear lever on the left side of the cockpit). 3) Hand controllers and hand trackers. This is the most realistic method short of building a complete cockpit with everything. It is the best compromise for VR because it lets you fly every aircraft using its corresponding 3D-modeled virtual cockpit. The player must move their hands around the cockpit; if they can't reach a button on the right side with the left hand, they need to use the right hand.
The downside is that there is no physical feedback on touch, so you rely on audio and visual feedback that something happened. But a hand controller, for example, lets you operate the landing gear lever without looking at it, knowing "it is just next to my kneecap", which allows you to focus on the landing and trigger the function by reaching for it, as in the real thing. This method is second best for learning, because the player must learn where everything is, quickly learns the order as well, and discovers the limitations and advantages of the design, since everything is bounded by real hand reach. A downside is that you may need to grab a controller first (still faster than moving a hand to the mouse) unless you have a small device attached to the hands. Hand tracking is almost the best of these, since nothing needs to be held, but it usually requires the hands to be inside the HMD's FOV to be usable. 4) Simpit. That is the ultimate. A skilled builder has everything properly done and wired, at a scale where VR matches the physical cockpit. The physical parts don't need to be painted or finished, as long as the proper function is in the proper place and of the proper kind (a button rather than a switch if the real one is a button). But the downside is that every aircraft requires its own unique physical simpit, so you are limited to one or two per room. If someone were smart, they would build a flexible, modifiable platform with swappable plates, each carrying the proper buttons and switches, mounted on adjustable arms attached to the bottom of the seat so they can be moved and tilted to the positions of the real cockpit. A panel doesn't need the proper shape or size to put the functions in the proper positions, so plates can be larger or differently shaped (square instead of triangular). Swapping the plates would take time (each with its own USB microcontroller, cabled to a USB hub and then to the PC), but it would be a cheap and valid option for VR use.
The argument that, because the real thing can't be fully replicated, no realism should ever be attempted or improved in the parts where it could be, is simply invalid. This is not only about self-control, where a player willingly keeps their head inside the canopy; it is also multiplayer anti-cheat, where another player will stick their head outside to see what they shouldn't, overcoming restrictions or limitations of reality (like a dirty canopy) by looking around them. It is similar to how g-forces should restrict TrackIR: the higher the g-forces, the more camera movement and speed are restricted, even forced to center, so that a player pulling 8-9 g cannot do so while looking 180 degrees to their rear, because that is unrealistic. A similar restriction can be made for VR users by having the g-force effects (which should be improved anyway, with blurring, desaturation, etc.) surround the player's head so that the only usable viewing area is the front. Call it "virtual blinders". So the TrackIR user's camera is pulled to center and the effect applied, while for the VR user the effect is applied by blocking the view around them rather than ahead. Or make the effect worse on both when the head moves away from the center position: the further and longer you are from center, the worse the recovery from the g-forces. In reality, under high g-forces you are trained to breathe and hold your body and head properly; you don't swing your head around like a five-year-old on a sugar overdose. The community could agree on the values, with adjustable levels so some players/servers can disable it and everyone can be put on the same footing. Just because you can't feel and experience the real g-forces doesn't mean it is stupid to try to build systems that simulate those experiences.
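The "virtual blinders" idea above could be prototyped as a simple mapping from g-load to an allowed forward viewing cone. This is a hypothetical sketch: the function name and all thresholds are illustrative placeholders, not real physiological tolerance data.

```python
def blinder_cone_deg(g_load: float,
                     full_cone_deg: float = 180.0,
                     min_cone_deg: float = 30.0,
                     onset_g: float = 4.0,
                     max_g: float = 9.0) -> float:
    """Return the usable viewing cone (degrees around the forward axis).

    Below onset_g the full cone is available; between onset_g and max_g
    it shrinks linearly toward a narrow forward cone, modeling the
    'blocked view around the head' effect described in the post.
    """
    if g_load <= onset_g:
        return full_cone_deg
    if g_load >= max_g:
        return min_cone_deg
    t = (g_load - onset_g) / (max_g - onset_g)
    return full_cone_deg - t * (full_cone_deg - min_cone_deg)
```

A renderer would use the returned cone to fade or black out anything the player looks at outside it; servers could scale or disable the mapping, matching the adjustable-levels suggestion above.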
  15. Just as important are the meat jacks inside the vehicles. If you are under attack and happen to be a truck driver in the middle of an open field, you had better run away, because that vehicle is going to have a bad time, and it won't move anywhere until someone dares to get back inside and drive it. Be inside an MBT with shells knocking hard against you, telling you that you have been seen, that you are a target, and that something far nastier could arrive any second, and you will drive like an F1 driver to somewhere safe, with smoke screen and all. If you are in a lightly armored vehicle taking near hits, you do the same, because you don't want a single hit on you: you drive away, far away, to a deep, safe forest where you are hidden. Getting shot is getting shot. Even without armor penetration or a K-kill, you can end up with injured crew or passengers. You need to protect the vehicle regardless of all the armor, because a damaged one means you can't perform your duties and tasks, and you have just risked your fellows' chances in the fight. So protect the vehicle by all possible means. That means the meat jacks inside a vehicle will have other things on their mind when under fire than just chilling, or fighting like nothing is happening.
  16. The missile should have a 0.6-0.8 probability of destroying the target. That is very high. But I think it should be noted that this includes guidance inaccuracies: the missile does not score a direct hit, but flies into the target's proximity, where the proximity fuze triggers the warhead and the fragments destroy the target. This is not modeled in DCS, as missiles can't be given their realistic kill ranges of hundreds of meters at high altitude (ahead of the target) down to tens of meters at low altitude. And we have no ECM at all against those missiles' proximity fuzes! You cannot avoid the missile getting near you, but with a modern ECM suite you can delay or jam the warhead detonation so that the missile becomes just a "telephone pole" that can't kill you unless it happens to hit you directly. This is one more reason why ECM is such a critical missing part of DCS: these factors simply don't exist. The S-75 system was excellent back then; it was the ECM system that finally protected fighters from it by making it "toothless". That is why even an A-6 could evade them.
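The kill-probability figures discussed here reduce to standard independent-shot arithmetic: a single-shot kill probability pk implies about 1/pk missiles expended per kill, and n shots give a combined kill chance of 1 - (1 - pk)^n. A generic sketch of that arithmetic (not a DCS model, and assuming independent shots):

```python
def salvo_kill_prob(pk_single: float, n_missiles: int) -> float:
    """Probability that at least one of n independent shots kills,
    given the same single-shot kill probability pk_single for each."""
    return 1.0 - (1.0 - pk_single) ** n_missiles


def missiles_per_kill(pk_single: float) -> float:
    """Expected missiles expended per kill at a given single-shot Pk."""
    return 1.0 / pk_single
```

For example, the one-kill-per-four-missiles figure from 1965 corresponds to a single-shot Pk of about 0.25, while one per fifty (1967) corresponds to about 0.02, which is the scale of degradation the post attributes to ECM.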
  17. IIRC Wags said that the top one is a sync for automatic lead calculation against a moving target: keep the crosshair on the moving target for a few seconds and it should correct for the lead. It is odd that it is called "Sync".
  18. While streaming works for many users and situations, it is not well suited to cockpit instruction. What we need is an instructor who sits in the same cockpit and doesn't necessarily control the aircraft, but has at least a few things: 1) On request, the trainee can hand controls to the instructor so a proper maneuver can be demonstrated. I would say this requires being on the same server. 2) A way for the instructor to click cockpit elements so that the trainee sees an "action box" on each, with numbers indicating the order to follow. 3) A virtual view box/dot visible to the other, so both can see where the other is looking; this would eliminate the "your other left" and "1 o'clock high!" confusion in dogfights and ground attacks. I am not so sure about controlling the other aircraft; sharing controls conflicts, IMHO, with the trainer's purpose, and one of the main reasons to sell the license should be offering the second (front) seat for free: anyone who owns the license can open the front seat for someone else while the owner sits in the rear seat with a control override. But the means to click all the switches and buttons, and have them appear not just as glowing golden boxes but also in the replay, with a delay of, say, 3 seconds, would help many people make tutorial guides as video or track files, with all the clicks recorded and played back. Combine that with a virtual view box / view-center dot showing where to look, and it would help not just in real time but in track files and tutorial videos as well. We already have almost all of these things; they just need to be applied slightly differently. Why is it necessary to use scripts to make instruments clickable in a desired order, when we have a mouse with which to record the clicking order?
  19. Fri13

    TPod Slewing

    You would be using the TPOD mainly for observation/recon (VCR), video datalink communication with ground forces, and laser designation for the flight's laser weapons. It would be so nice to have AI capable of attacking a target you are painting, or vice versa. For example, you could designate a target for your wingman, who then uses DMT/LST to lock onto it and engage it with the desired weapon; or the AI designates for you, and you use DMT/LST to find the designated target and engage it. Not everyone in the flight needs to carry a TPOD. And if there is a JTAC on the ground, you don't need it at all: you have the DMT to find their designated target and engage it with any weapon. The only challenge is that you can't visually ID a target with the DMT/TV at night, but the JTAC is there for that reason anyway. Consider how easy it is to use the DMT to find a laser-designated target (especially after a JTAC has sent a CAS call and you accept/reject it as wanted): just press SSS Aft and it automatically searches for the designation point (without a TD it searches 5 nmi ahead of you, unless you manually move the wide/narrow scan box), at least inside the HUD. So if you know where to point your nose, you find the spot and have it designated, with delivery and everything given to you automatically; all you need to do is fly and release the weapon. Re-attack becomes easy: after weapon release you are supposed to get the attack line drawn on the EHSI as a course line (not implemented), and once the DMT gimbal limit is reached, the DMT display automatically swaps to the EHSI compass for re-attack guidance (not implemented). After each attack run the system swaps from DMT to INS automatically (not implemented) and delivers the attack run only to the TD.
There is no more automatic laser spot search or TV contrast tracking once you turn back toward the TD inside the DMT gimbal limits, but the EHSI compass switches back to the DMT/TV video without a contrast lock (open large crosshair), and you can decide whether you need LST (press SSS Aft once to switch from INS -> LST) or TV (SSS Aft twice) to sweeten the target or track a moving target with the TDC (the manual mentions that in INS mode, if the TV is not open, SSS Forward opens TV mode on the TD when it is inside the HUD). The ARBS/LST mode is not available if a proper laser code has not been entered after the weight-on-wheels sensor is released (take-off). This is not currently implemented; the laser code is incorrectly preset to 1111 (that should be the default only for laser Mavericks and the TPOD, while the mission computer laser code should be zeroed out), so DMT/LST is always available. The DMT/LST mode also requires the pilot to transfer the designation to TV or INS mode if the laser designator needs to be switched off or moved a long distance, because once laser tracking is lost, the DMT/LST returns to searching and the TD is lost. This is not modeled at the moment: the TV stays contrast-locked, and the LST jumps instantly to wherever the laser spot is shifted. A sudden shift, or the LOS to the spot being blocked, is a situation to avoid, and the pilot needs to transfer the designation before it happens.
  20. Fri13

    TPod Slewing

    Are you controlling the TPOD, or are you controlling the INS (HUD mode; Sensor Select Switch (SSS) Forward enters INS mode and also swaps between INS and Maverick)? The Harrier currently has an incorrect INS implementation that lets you move the target designation (TD) in INS mode even when the TD is outside the HUD. So it is easy to be in INS mode by accident if you forget to enter the TPOD's special control mode by pressing SSS Down (not Aft) twice within a 0.8-second period (you exit TPOD control mode the same way). When you are in TPOD mode, the TDC and SSS are assigned entirely to the TPOD, and you can't control anything else until you exit it. ------------ The Harrier was not designed to have a TPOD from the beginning: it had the DMT, and when the navigation FLIR was added, the Harrier became night-attack capable after being day-only. Much later the TPOD was added, and it could only be integrated by emulating it as a Maverick missile. This creates a severe limitation: only one Maverick video feed can be open to the computers at a time, so you can't operate any Maverick simultaneously with the TPOD, which is why the SSS has the special 2x-Down-press mode that commands the avionics to operate only the TPOD. Since the TPOD still operates as a sensor that can be continuously slaved to the target designation, you can move the TPOD while in INS mode. Harrier pilots are trained to use the DMT as the primary sensor in the N/A variant and the radar in the Plus variant, with the SSS used Forward or Aft based on where the pilot is looking: when looking at the HUD, press SSS Forward to transfer TDC commands to the HUD; when looking at the MFCD, press SSS Aft to transfer the TDC to the radar or DMT. The TPOD is an "afterthought", and its integration is just a bonus for buddy-designation, observation, etc. Even in the N/A Harrier, the DMT is the only sensor that can calculate the target altitude and hence the slant range to the target.
The TPOD can't do it. The ARBS sensor is extremely accurate (more accurate than the radar ranging of the radar variant) when you have a TV contrast lock (which it requires) or laser spot tracking (automatic) and you fly the Harrier for a while to give it the angles to calculate the slant range. But your altitude above ground has a small error, since the lock can be on anything (trees, buildings, a hill) rather than the terrain itself (not modeled correctly), and this error grows with altitude, so it is larger when flying at 3,000 ft than at 1,000 ft. The Harrier's answer to this is the barometric and GPS altitude measurements, which complete the bombing triangle.
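The "press SSS Down twice within 0.8 seconds" entry/exit sequence described above can be sketched as a simple timing check. This is a generic double-press detector, not the actual avionics logic; only the 0.8 s window comes from the post.

```python
from typing import Optional


class DoublePressDetector:
    """Detects two presses of the same switch position within a time
    window, like the SSS Down x2 (under 0.8 s) used to enter or exit
    the Harrier's TPOD control mode as described above."""

    def __init__(self, window_s: float = 0.8):
        self.window_s = window_s
        self._last_press: Optional[float] = None

    def press(self, now: float) -> bool:
        """Register a press at time `now`; returns True when this press
        completes a valid double press (and consumes the pair)."""
        if self._last_press is not None and now - self._last_press <= self.window_s:
            self._last_press = None
            return True
        self._last_press = now
        return False
```

A mode manager would call `press()` on each SSS Down event and toggle the TPOD control mode whenever it returns True, leaving single presses free for their normal function.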
  21. You didn't miss anything. An older manual applies as evidence until newer information shows the old information to be incorrect, either by stating it directly or by explaining new behavior for the same thing. If someone can provide, say, 2-3 older manuals, each of which states feature X, establishing the logic of the design/system, then that should be used, as it is the only evidence there is. "No evidence" can't be used as evidence of the opposite unless the circumstances of the whole subject change in a way that invalidates all the previous documentation (for example, all the previous manuals describe monochrome displays and the simulated one is color), and even then, if the limited newer manual says nothing about it, the newer one doesn't apply as evidence, because it mentions nothing about changes to the subject. If someone has no evidence in any direction, then there is nothing but educated guesses based on hypothesis and logic. And it is illogical that a feature present in multiple previous versions would have been removed without any other changes to the whole topic. In this case: all other colors, symbols, and drawings are maintained, but the one threat symbol that is meant to save the pilot's life and mark a high-risk situation is removed and rendered in the normal-condition manner, so the pilot gets no visual indicator of the dangerous situation. SMEs are nothing more than party testimonies. If they are the only evidence there is, then multiple sources (3-4) are required, each confirming the behavior under proper questioning ("Please explain what colors the different MFCD symbols have" vs. "Are the threat rings colored with some color, like yellow?").