
Pierre Sprey & Lt. Col David Berke debate


Hummingbird


https://forums.eagle.ru/showpost.php?p=3200212&postcount=4003

 

After 5 seconds of listening to Sprey, I stopped listening to what he was saying and started to count the "you know"s; after 11 I lost interest in that as well. I wish they would stop bringing Sprey on interviews; his knowledge and opinions are meaningless and outdated. He never designed any aircraft. He helped write the requirements for the A-10, and in most accounts I have read, he had little to do with the requirements for the F-15 and F-16 set by Col. Boyd and the "Fighter Mafia". Dr. William M. Curtis III was the Chief of Aero & Director of Engineering for the A-10. Harry Hillaker was chief designer for the F-16, AFAIK. I would have loved to hear the Col. Berke interview without Sprey in it.

Edited by mvsgas

To whom it may concern,

I am an idiot; unfortunately for the world, I have an internet connection and a fondness for beer... apologies for that.

Thank you for your patience.

 

 

Many people don't want the truth; they want constant reassurance that whatever misconceptions/fallacies they believe in are true.


 

 

Did you not notice that the quote is full of personal opinions, ad hominems, and unsubstantiated claims and assumptions?

 

If the public wants to attack an interview or its claims, it is better done correctly and kept on topic.

 

Example:

 

After 5 seconds of listening to Sprey, I stop listening to what he was saying and started to count "you know" after 11 I lost interest on that as well.

 

That is a personal opinion. It shows a lack of respect toward anyone the writer disagrees with, as he doesn't even take the time to read or listen to what the other person has to say. He has made up his mind and won't even entertain the possibility that the other side has valid points.

 

I wish they stop bringing Sprey on interview, his knowledge and opinions are meaningless and outdated.

 

Again, a personal opinion and a wish. Bold claims about another person's knowledge, without any information to back up why that knowledge is meaningless. And then there's the argumentative error of treating subject knowledge as if it were the same thing as his opinions, and attacking the other person's opinions on that basis.

 

 

He never designed any aircraft, he helped write the requirement for the A-10 and in most account I have read, he had little to do with the requirements for the F-15 and F-16 set by Col Boyd and the "Fighter Mafia".

 

Again, personal attacks without sources or facts, all based on the assumption "in most accounts I have read", which is like quoting Wikipedia or some ghostwriter's book. It doesn't matter what the public information is, as it gets twisted very quickly when there are no valid sources and information to back things up.

 

And then there are logic errors as well, such as "little to do with the requirements for the F-15 and F-16". If someone comes up with the idea to start something and sets the target goal, it is his idea and his project, even if he never entered the aircraft factory or did a single calculation. That is what leadership means: setting the targets and making the decisions, not executing them.

 

It is also a personal attack to undermine the career, influence, and work of that person by claiming, on the basis of doubts and personal opinions, that he doesn't matter.

 

Dr William M Curtis III was the Chief of Aero & Diirector of Engineering for the A-10. Harry Hillaker was chief designer for the F-16 AFAIK. I would have love to hear the Col. Berke interview without Sprey in it.

 

Again, false logic. Let me give you an analogy: a chief designer is like a company CEO. They only execute things; they don't plan them the way the company board does. The true power and the ideas lie with the board, and specifically with the board president.

 

Even when you are the chief designer or director for a specific project's execution, you still answer to the company, and in this case (a US military project) to Congress as well, and to the very small group of people who give those people the requirements for what needs to be done.

 

So, here comes the assumption on my part. If Pierre Sprey has been in the meetings with the people deciding the budget, sketching the requirements for the weapon on the board among congresswomen and congressmen, generals, and the rest of the bureaucracy, deciding what is being ordered and wanted... then he is the deciding person, not the chief designer, etc., as they are just executing what the upper level demands.

 

 

I don't know Pierre Sprey; I've never met him. I only know some of the interviews, short ones, etc. But I know very well how many F-16, F-15, F-35, etc. fanatics attack this one guy, and IMHO most often with total fallacies, only adding more misinformation to the web for others to consume.

 

So what I would like to know, and what the whole fanboy community deserves, is for someone to come up with facts showing that Pierre Sprey was never in the meetings where the ideas, demands, and requirements were decided for the F-16, A-10, and so on...

 

That he was not there to advise, give his opinions, or offer his knowledge to anyone who made those decisions...

 

He doesn't need to know anything about the engineering, only what is required to perform better than the opponent.

 

Here there is more truth than fiction:

 

 

That is how things are done... at almost every level, everywhere. There is always someone making the decisions at the big scale, regardless of whether it is possible or not:

 

 

 

It would be better for everyone if the people who say they know very well that Pierre Sprey was not involved in the F-16, A-10, etc. designs and requirements would present facts instead of their opinions, assumptions, and black-and-white claims that there is no heavy influence on decisions beyond titles and job descriptions...

 

And it would also be more honorable and respectful toward a public person not to try to discredit his whole personality.

 

People make mistakes, they make wrong decisions, and so on... And everyone has personal opinions. But at least we should try to respect even people who make fools of themselves in public. Everyone deserves to be heard out, and to have their information tested, validated, considered, and remembered, because in some other situation it can be very valid...

 

That is the whole point of having small groups whose job is to assume, test, and try opposing ideas and plans, as you need to be prepared for them. That was the whole point of the Navy Fighter Weapons School and the like, where a small group does research and studies, finds the opposing tactics and ideas, and then puts them to use as training.

 

So even if a person like Pierre Sprey has been in such a group, as claimed (being against the F-15, M1 Abrams, etc., as you can read on the web), it doesn't mean that he lacks knowledge or that his opinions are "outdated".

 

Then there is always the business of information: you don't take your real tactics or capabilities to a training exercise with a potential enemy and reveal them.

You want to show enough potential and capability to appear strong and capable, but you don't go all in if you can avoid it.



 

So even if the person like Pierre Sprey has been in such group as claimed (being against F-15, M1 Abrams etc etc that you can read from the web) it doesn't mean that he doesn't have knowledge or his opinions are "outdated" etc.

 

 

His opinions are outdated, incorrect and asinine.

 

Here's a good one:

 

 

1:10 in, he goes on a rip about how the F-15 was bloated with "two engines, a big radar, everything the Air Force wanted" and calls it junk.

 

The F-15: the most successful air superiority fighter in history.

 

Just watch the video. He's an old fool desperately trying to pretend his idea (i.e., the very, very original F-16) is still the way forward. No multi-role, no real radar, no ability to be useful the other 75% of the time, when enemy fighters aren't a factor...


Edited by RaceFuel85
  • Like 1

Sprey has only a couple of valid points, based purely on logic (high wing loading, poor measured STR, etc.), but he undermines any credibility he might have had by completely ignoring/dismissing actual facts presented to him and drawing a ridiculous final conclusion, i.e. that the F-35 is junk because it can't turn on a dime. Defending his final conclusion, he's basically saying "I'm right, you're wrong, just because", and that just doesn't hold up in any court.


Edited by Hummingbird
  • Like 1

Sprey has no credibility, and I'm not seeing too many valid points there.

 

The guy is no idiot but he specializes in taking everyone else for one - since leaving the Pentagon he has been aligned with various groups opposed to military spending - so for better or worse he basically wants to influence certain people to redirect those funds to where his buddies need them - he certainly has no interest in providing the US with the best tactical fighter jet.

 

He has written various papers since the late 80s; some are on the net, and most can be found at a place called POGO. Read any of them: they are mostly full of total and utter BS.

 

This is from THE REVOLT OF THE MAJORS: HOW THE AIR FORCE CHANGED AFTER VIETNAM by ex-USAF bod Michel III.

 

While working on the F-X, Boyd met Pierre Sprey, a weapons system analyst on the OASD/SA staff, whose background was similar to Enthoven's but much less distinguished. By his own account, Sprey was a dilettante with an engineering degree but no military experience. After graduation from Yale, Sprey became a research analyst at the Grumman Aircraft Corporation for space and commercial transportation projects. He came to OSD/SA in 1966, where he declared himself an expert on military fighter aircraft, despite his lack of experience. Sprey admitted being a gadfly, a nuisance, and an automatic opponent of any program he was not a part of.45 He was opposed to many Navy and Air Force tactical air systems, especially the Navy's Grumman F-14, because of its size and complexity.

 

------------

 

In mid-1969, Sprey mounted a formal challenge to the F-14/F-X. In the name of the OSD/SA staff, he drafted a "Draft Presidential Memorandum [DPM] on Tactical Air," suggesting both the Air Force and the Navy adopt the VF-XX/F-XX concept, claiming it would allow the services to double the size of their future fighter force.24 The DPM circulated around the Pentagon for coordination and, coming after Laird had seemingly gutted OSD/SA, dismayed both the Air Force and the Navy because it threatened both the F-14 and F-X programs. The Navy was especially unhappy because Laird's reduction in the number of F-14s left the Navy short of the number of new fighters required for its carriers, but it wanted more F-14s, not a less capable lightweight fighter. The Navy took the lead in the counterattack, and in an informal but devastating response circulated around the Pentagon, George Spangenberg, the Director of the Naval Air Systems Command's (NAVAIRSYSCOM) Evaluation Division, and Fred Gloeckler of the Systems Evaluation Division, wrote a scathing analysis of Sprey's work. The Navy engineers said the light weight claimed for the VF-XX was unachievable and the proposed thrust-to-weight ratio and wing loading could only be achieved by a larger airplane. They added it was "obvious" that Sprey was not an aeronautical engineer and that:

[Sprey's] basic concepts have been considered in detail by the Services during the formative stages of the F-14 and F-15, have been reviewed by DDR&E [Deputy Director of Research and Evaluation], and rejected in all decisions to date... the reconsideration of the concept [VF-XX/F-XX] as a viable alternative should have been turned down before submission to the services...

In common with past papers by the same author, this study contains many fallacious assumptions, half-truths, distortions, and erroneous extrapolations. Unsubstantiated opinions are presented as facts. Any rebuttals give the appearance of arguments against the rudimentary virtues of simplicity, high performance, and low cost.25

This response, while delivered with feeling, was factual and analytical and effectively blunted Sprey's attempt to forward the DPM.

 

It also showed that Sprey was out of his class when confronted with knowledgeable aeronautical engineers, but it was a valuable lesson for Sprey, Boyd, Riccioni, and the other Critics: do not make arguments in front of experts. Their arguments would only achieve traction when they could present them to non-engineers unaware of the complexity and trade-offs of aircraft design. This meant they would have to move out of the Pentagon and fight on a different field.

 

 

So as an example, let's take wing loading: by itself it is no measure of how an aircraft turns, especially when you get to 4th-gen designs like the F-16. It is considered an old-fashioned metric because it does not account for things like LEF/TEF, tail lift, or vortex lift, but mentioning wing loading makes him sound like an expert to the majority of his audience, so all good!
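To put numbers on that point: wing loading is nothing more than gross weight divided by reference wing area, which is exactly why it says so little on its own once strakes, body lift, and flap scheduling contribute. A minimal sketch, using rough, publicly quoted ballpark figures (real values swing heavily with fuel and stores, so treat these as illustrative only):

```python
def wing_loading(weight_lb: float, wing_area_sqft: float) -> float:
    """Classic wing-loading metric: gross weight over reference wing area (lb/ft^2).

    Note the metric only sees the *reference* wing area; lift from the
    fuselage, strakes, and flaps is invisible to it.
    """
    return weight_lb / wing_area_sqft


# Rough illustrative figures only (not authoritative spec-sheet values):
f16 = wing_loading(26_500, 300)   # F-16C, ~loaded weight, ~300 ft^2 reference wing
f35 = wing_loading(49_500, 460)   # F-35A, ~loaded weight, ~460 ft^2 reference wing

print(f"F-16 ~{f16:.0f} lb/ft^2, F-35A ~{f35:.0f} lb/ft^2")
```

Two aircraft with the same number can turn very differently, which is the whole objection to using this figure as a stand-alone argument.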

 

 

We have just witnessed an F-35 apparently performing a fully controlled "helicopter", something previously thought possible only on jets with TV (thrust vectoring). On the other hand, Sprey in the video has upgraded the F-35's maneuverability to F-104 level instead of F-105; of course, the F-104 was limited to around 15 degrees AoA due to its T-tail, so again a ludicrous comparison. He certainly has no access to, or clue about, actual performance.


Edited by Basher54321
  • Like 3

I am going to chime in here.

 

I knew Chip back at the training commands when we were both JOs, and I am amazed at his attitude with regard to visual fighting.

 

We trained extensively in dogfights, both at Top Gun and at the squadron level. Sprey makes good points as to why, and yet Berke tries to downplay it extensively; this is baffling to me.

 

I couldn't care less about 5th generation; you can't rely on complex systems being 100%, hence the need to train for dogfighting, etc. Sprey does a good job of making this case from historical background.

 

The other aspect that Sprey brings to the table is his talking points regarding operational test and evaluation. I know what he is talking about because I worked at COMOPTEVFOR in the aviation department and saw first hand the weaknesses in our programs.

 

We need to keep the industrial complex accountable and honest with their products and our taxpayer money. This is what Sprey is getting at.

 

LtCol Berke is clearly biased towards the aircraft he flies and he has to be careful at what he says publicly but Sprey is making valid points.

 

Nobody lost this debate, this was a good discussion that shed more light on what is going on with the F35.

 

I suggest you keep an open mind when listening to this discussion.

 

BTW I had two careers in the navy one as an F18 pilot and one as an acquisition officer so I know both sides of this argument very well.

 

The one aspect that Berke may not be consciously aware of is the "golden crew" phenomenon, where a developmental or operational test group becomes so comfortable with the equipment they are testing that they lose sight of the true state of the product. Bottom line: humans have faults at all levels, and unfortunately they can compound defects in development without knowing it.


Edited by neofightr

As a follow-on point, since it wasn't mentioned in the discussion:

 

Drones: they are the future, and they will make the F-35 and F-22 (any 5th gen) obsolete very quickly.

 

Drones can (pros):

-pull inhuman Gs thereby surpassing all manned a/c in maneuverability and most missiles.

-process and manage targets faster than humans.

-be virtually undetectable thanks to small radar footprint

-require no training just software updates (performance constantly being improved through software updates unlike humans that have a natural limit).

-come in at a fraction of the cost once development is refined and finalized.

-be completely autonomous thereby negating jamming since no comms are required.

-be a massive force multiplier due to size and capability. Imagine the USS Ford with 200 drones etc.

-put no human at risk for the mission.

-not subject to piloting error

 

Cons

-subject to EMP effects (just like manned a/c)

-subject to system failures (just like manned)

-subject to range limitations due to small size requirements (air-to-air refueling will be a no-brainer and very easy, obviously)

-subject to less ordnance due to size but made up for in numbers.

-subject to accountability/legal issues when mistakes are made for mission target assignments etc.

 

Now look at the pros and cons and tell me: what politician in their right mind would not pick 20 drones over a single F-35? All the military top brass (non-pilots, of course) would see it as a no-brainer.

 

This drone topic is the elephant in the room when it comes to fighter pilots, because the writing is on the wall: there will soon be (15 years max) no need for pilots, other than a small, elite, SEAL-team-like force for those special black-ops missions requiring real-time critical thinking.

 

Both the F22 and F35 programs are making the strongest case for drones due to their operating budgets and complexity issues going sky high since their initial proposals.

I am surprised Sprey doesn't talk about it.

 

Even back in my flying days 17 years ago, I knew drones were coming and was telling my buddies: just watch, drones are on the way and will take over once CPU power is sufficient and the need for comms goes away.


If a manned aircraft suffers a software failure, the mission is scrubbed; at worst, an aircraft is lost. If an autonomous weapon system suffers a software failure, well, now you have a very unpredictable piece of equipment with heavy weapons flying around, and if it's fully autonomous, no way to call it back without destroying it. Add in software attacks designed to create failure states that could be catastrophic, and I find people claiming that fully autonomous weapons are in the near future to be overly optimistic about the technology.

 

'Manned' drones are subject to communications lag, reduced situational awareness and, most dangerously, communications override; the story of insurgents being able to download the video feed from drones is just the tip of the iceberg. None of these features makes them suitable for air superiority in a modern contested environment, and against an enemy proficient in cyber warfare, drones can do more harm than good if not properly developed. I do not view the F-35 as being at all threatened by autonomous weapon systems, simply because the technology to provide them safely, to a level acceptable to the country and at a cost acceptable to the government, is not there yet.

  • Like 1

Ah, now I see why gunfights are downplayed with regard to the F-35.

 

This dated article gives insight into the mindset of the no-guns philosophy. History repeats itself yet again.

 

http://www.thedailybeast.com/new-us-stealth-jet-cant-fire-its-gun-until-2019?via=googleplus

 

"The lack of a gun is not likely to be a major problem for close-in air-to-air dogfights against other jets. Part of the problem is that the F-35—which is less maneuverable than contemporary enemy fighters like the Russian Sukhoi Su-30 Flanker—is not likely to survive such a close-in skirmish. “The jet can’t really turn anyway, so that is a bit of a moot point,” said one Air Force fighter pilot.

“The JSF is so heavy, it won’t accelerate fast enough to get back up to fighting speed,” said another Air Force fighter pilot. “Bottom line is that it will only be a BVR [beyond visual range] airplane.”

 

But then again, these could be F-16 and F-15 pilots pooh-poohing the plane.

 

:noexpression:


If a manned aircraft suffers a software failure, the mission is scrubbed, at worst an aircraft is lost. If an autonomous weapon system suffers a software failure, well now you have a very unpredictable piece of equipment with heavy weapons flying around and if it's fully autonomous, no way to call it back without destroying it. Add in software attacks, designed to create failure states that could be catastrophic, I find people claiming that fully autonomous weapons are in the near future to be overly optimistic about the technology.

 

'Manned' Drones are subject to communications lag, reduced situational awareness and most dangerously, communications override, the story of insurgents being able to download the video link from drones being just the tip of the iceberg. None of these features make them suitable for air superiority in a modern contested environment, and against an enemy proficient in cyber warfare, drones can be more harm that good if not properly developed. I do not view the F-35 as being at all threatened by autonomous weapons systems simply because the technology to safely provide them to a level acceptable to the country and at a cost acceptable to the government is not there yet.

 

I promise you these software failure scenarios have been thought through. If such a failure occurred, the drone would self-destruct, just like rockets automatically do after post-launch failures; hence the incentive to have robust, redundant software checks.

 

BTW, drones have been successfully used in the military for over 20 years, so they already have a track record, and it is only getting better.

 

The beauty of drones is that if one suffered an engine failure over enemy territory, it would self-destruct to deny the enemy the wreckage, and of course there would be no pilot to imprison. If the engine failed over friendly territory, it would deploy a chute (depending on size) to be recoverable.
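That failure-handling idea, as described, boils down to a tiny decision table. A hypothetical sketch of the rule (all names and the third "ditch" fallback are invented for illustration; nothing here comes from any real drone software):

```python
from enum import Enum, auto


class Action(Enum):
    SELF_DESTRUCT = auto()
    DEPLOY_CHUTE = auto()
    DITCH = auto()


def engine_failure_action(over_friendly: bool, has_chute: bool) -> Action:
    """Hypothetical engine-failure rule from the post: deny the enemy the
    airframe over hostile ground; try to recover it over friendly ground."""
    if not over_friendly:
        return Action.SELF_DESTRUCT   # deny the enemy; no pilot to capture
    if has_chute:
        return Action.DEPLOY_CHUTE    # recoverable airframe
    return Action.DITCH               # assumed fallback when no chute is fitted


print(engine_failure_action(over_friendly=False, has_chute=True))  # Action.SELF_DESTRUCT
```

The point of spelling it out is that the "hard" part isn't this logic; it's trusting the inputs (position, IFF of the ground below) enough to let it run unattended.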

 

Autonomous means no video transmissions that could be intercepted. It would be unhackable because there would be no way for it to receive hacking instructions when in emcon mode.

 

You need to think outside the box. Drones aren't going to be controlled like the old-school RC types. They will be programmed to go silent once in enemy territory, then come back up after feet wet. The only way they can be stopped is with another drone, an advanced missile, or an EMP pulse.

 

We are on the verge of mass autonomous cars on human-populated city streets, and yet fighter/attack military drones are a stretch? Give me a break. :doh:

 

Oh, and by the way: if you think that because there is no talk about the latest military drones in government and military channels it is somehow a dead topic, heh, think again. :music_whistling:

 


Edited by neofightr

Autonomous means no video transmissions that could be intercepted. It would be unhackable because there would be no way for it to receive hacking instructions when in emcon mode.

 

So, you believe the kill decision can be coded into a piece of hardware, and said piece of hardware can then be trusted to perform its tasks autonomously? With no way whatsoever to recall it?

 

While I'm sure many people work towards this scenario, I fully hope it never, ever comes to that. Leaving aside a myriad of Science Fiction books and movies that discuss the morals, ethics, and fallibility of such devices (*), I believe I know enough about software engineering to consider this idea entirely and utterly ridiculous, if it wasn't so extremely serious.

 

Coming back on topic, I only listened to part of the discussion, but it was quite interesting, thanks for sharing! :thumbup:

 

(*) There are a lot of bad things to be said about this episode, but it does a very nice job of highlighting the problem with drones gone too far: Star Trek: The Next Generation 1x21 "The Arsenal of Freedom". Well worth watching in the next TNG re-run. On the upside, if we encounter killer drones trying to hunt down every last one of us, we could just try to purchase them. ;)


So, you believe the kill decision can be coded into a piece of hardware, and said piece of hardware can then be trusted to perform its tasks autonomously? With no way whatsoever to recall it?

 

While I'm sure many people work towards this scenario, I fully hope it never, ever comes to that. Leaving aside a myriad of Science Fiction books and movies that discuss the morals, ethics, and fallibility of such devices (*), I believe I know enough about software engineering to consider this idea entirely and utterly ridiculous, if it wasn't so extremely serious.

 

Coming back on topic, I only listened to part of the discussion, but it was quite interesting, thanks for sharing! :thumbup:

 

(*) There are a lot of bad things to be said about this episode, but it does a very nice job of highlighting the problem with drones gone too far: Star Trek: The Next Generation 1x21 "The Arsenal of Freedom". Well worth watching in the next TNG re-run. On the upside, if we encounter killer drones trying to hunt down every last one of us, we could just try to purchase them. ;)

 

What do you think happens in an F-18 when the pilot pickles a JDAM that takes several minutes to impact? Do you think there is a way to recall that? Or better yet, a cruise missile (there might be a self-destruct for that one) launched from hundreds of miles away.

 

Or how about an anti-ship missile fired at a target over the horizon that will take several minutes to hit.

 

In an air-to-air mission, just like today, the pilots are cleared weapons free and given a vector; they are told whether the contact is a bandit or a bogey, then prosecute the targets. If it's a bogey, they ID it to verify it's a bandit. A drone with the right sensors could easily handle this scenario.

 

The drone in this case is really just a smart missile ordered from the command/controller plane to fire smaller missiles then return to be rearmed etc.

 

What we are talking about are windows of time from decision to execution nothing more. In all those scenarios experienced commanders have given the order to execute to an aircrew. No different for a drone.

 

A drone could easily be standing by airborne on station, be given the execute order while feet wet, then go EMCON and full afterburner, attack a ground target in, say, 10 minutes, and RTB 15 minutes later.

 

Another scenario: close air support. The drone is on station, fully autonomous, but programmed to wait for a coded marker to be fired at a target by a FAC (or ground agent). Once the FAC designates the target with the marker, the drone prosecutes the target; if no marker is detected in the target area within a certain time window, the drone RTBs. This is all done autonomously, of course.
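As described, that CAS scenario is essentially a timeout-bounded wait loop: loiter, scan, engage only on a code match, go home when the window expires. A hypothetical sketch (the function names, marker codes, and timings are invented for illustration; this mirrors the post's description, not any real system):

```python
import time


def cas_loop(scan_for_marker, prosecute, expected_code: int, window_s: float) -> str:
    """Hypothetical autonomous-CAS loop from the post: wait for a coded FAC
    marker, attack only on an exact code match, RTB when the window expires."""
    deadline = time.monotonic() + window_s
    while time.monotonic() < deadline:
        code = scan_for_marker()     # one sensor sweep; None if nothing seen
        if code == expected_code:
            prosecute()              # engage the marked target
            return "ENGAGED"
        time.sleep(0.01)             # pause before the next sweep
    return "RTB"                     # no valid marker inside the window


# Example: the FAC fires the correct marker on the third sensor sweep.
sweeps = iter([None, None, 1337])
result = cas_loop(lambda: next(sweeps, None), lambda: None,
                  expected_code=1337, window_s=1.0)
print(result)  # ENGAGED
```

Note the safety property the post is implicitly relying on: the default outcome on silence is RTB, and engagement requires a positive, coded, external designation.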

 

So many ways to use a drone effectively and responsibly, it's really up to the imagination.

 

The biggest sticking point right now is who assumes responsibility if the drone attacks a friendly due to input error. Is it the programmer? The commander giving the order? Someone will have to take the blame, and putting the drone on trial won't work.

 

BTW, these scenarios are all hypotheticals from my imagination; I do not have any insight into the actual capabilities of today's drone programs. :book:

 

I am just using educated guesses from my experience.

 

BTW, the original Star Trek had the best episode on the dangers of autonomy (the M5 episode).


Edited by neofightr

What do you think happens in an F18 when the pilot pickles a jdam [...]

 

Glad we got that sorted out. :D

 

(But see below for a back-reference to this point)

 

What we are talking about are windows of time from decision to execution nothing more. In all those scenarios experienced commanders have given the order to execute to an aircrew. No different for a drone.

 

True, and not true.

 

When given an order, a soldier has the ability to question that order, to consider whether or not the order falls within the ROE, to evaluate whether or not the person giving the order actually has the full picture... in short, the soldier has a conscience. I do not believe we can program this into a piece of hard- and software.

 

Another scenario, Close air support, drone is on station fully autonomous but programmed to wait for a coded marker to be fired at a target from a FAC (or ground agent), once the FAC designates the target with the marker, drone prosecutes the target, if no marker detected in target area after certain a time window, drone RTBs. This is all done autonomously of course.

 

If the drone's kill decision is based on an outside marker, it's not really autonomous in its decision making process. :smartass:

 

What's more, the drone's requirement to scan for that marker (besides using all other types of sensors in order to maneuver and to acquire and track targets) means that it will scan for outside input. If you take a very superficial look at the ways that are used nowadays to hack computers by manipulating just about any kind of input, I'm sure you would realize how insane it sounds to leave the decision making process of using lethal force to a computer.

 

The biggest sticking point right now is who assumes responsibility if the drone attacks a friendly due to input error. Is it the programmer? the commander giving the order? Someone will have to take the blame and putting the drone on trial won't work.

 

That would be one aspect to consider, but I'd much rather we discuss how to avoid it altogether than discuss whom to blame if (when) it happens.

 

-----------------------------------

 

You do bring up some good and important points, for instance the "smart" weapons and what happens after release. I'm sure there are already too many soldiers who wish they could undo that button press when the situation in the target area changed during those 10, 20, 30 seconds until impact. But that's not what I mean.

 

What I mean is the actual decision making process to employ lethal force. While politicians and generals might set strategic goals, IMHO the act of actually firing a weapon should never be left to a device.

 

To put it differently: Say you wanted to protect your property, and you would consider lethal force as a valid means to achieve that goal. Would you really, really trust fully autonomous weapons to do the job? I know what my answer to that question is, and will be.

 

BTW the original star trek had the best episode on the dangers of autonomy (the M5 episode).

 

Great reference, thanks! I definitely need to watch the classic Star Trek again. :smartass:


Talking about drones: those things crash so often it feels like the USAF during the 1950s. There are so many Accident Investigation Board (AIB) reports; there are 5 this year alone.

http://www.airforcemag.com/AircraftAccidentReports/Pages/default.aspx?Year=2017

I never worked on them, but I've talked to crew chiefs, and they say they lose even more, but they can recover them, so they don't have to do Class A reports. I was told that if they lose communication, some are programmed to land automatically, and they don't always choose the correct airfield, or even country, to land in. You all remember what happened to that LM RQ-170?


Edited by mvsgas



I couldn't care less about 5th generation; you can't rely on complex systems to be 100% reliable, hence the need to train for dogfighting etc. Sprey does a good job of making this case based on historical background.

 

You should care very much about 5th gen. Technology makes a fighter. While the generational thing is partially marketing, it does point out some of the F-35's strengths. These things will play a role in combat and will likely help the F-35 come out on top of engagements with currently existing enemies. Pilot training matters, but it's something else entirely, and it's not the single deciding factor in combat.

 

LtCol Berke is clearly biased towards the aircraft he flies and he has to be careful at what he says publicly but Sprey is making valid points.

A little. I feel like most of Sprey's points were canned lines really. Some points that he did make weren't really fleshed out from what I saw.

 

As a follow on point since it wasn't mentioned in the discussion.

 

Drones, they are the future and they will make the F35, F22 (any 5th gen) obsolete very quickly.

 

Drones can (pros):

-pull inhuman Gs thereby surpassing all manned a/c in maneuverability and most missiles.

-process and manage targets faster than humans.

-be virtually undetectable thanks to small radar footprint

-require no training, just software updates (performance constantly improved through software updates, unlike humans, who have a natural limit).

-come in at a fraction of the cost once development is refined and finalized.

-be completely autonomous thereby negating jamming since no comms are required.

-be a massive force multiplier due to size and capability. Imagine the USS Ford with 200 drones etc.

-put no human at risk for the mission.

-not subject to piloting error

 

Increased g limits will come at the cost of weight and unit price. They're not outmaneuvering missiles.

 

They may process targets faster than humans eventually

 

I don't think they'll be much different from piloted aircraft in terms of RCS. I don't foresee the micro drone swarm taking over. To me it seems like regular aircraft will just take out the pilot. They'll be about the same cost and perform about the same.

 

The drones may or may not require training - they could use evolutionary algorithms to improve them. Whatever updates they receive will need to be designed and tested though. Commanders will also need to figure out how to best use the drones unless military command becomes AI as well.

 

I don't see why they would be cheaper.

 

Jamming will continue to be an issue. Drones will need to communicate with each other and find targets in a dense EW environment.

 

200 drones doesn't mean anything without knowing the capability of those aircraft. If they're all short legged and carry a tiny payload then they could very well be inferior to regular piloted planes.

 

No pilot risk, right

 

No human error in operation, but possible human error in programming and planning.

 

Cons

-subject to EMP effects (just like manned a/c)

-subject to system failures (just like manned)

-subject to range limitations due to small size requirements (air-to-air refueling will be a no-brainer and very easy, obviously)

-subject to less ordnance due to size but made up for in numbers.

-subject to accountability/legal issues when mistakes are made for mission target assignments etc.

 

You can't really bring tankers into heated airspace. The range limit is a problem. Smaller size may also limit payload and avionics capabilities which could decrease survivability.

Now look at the pros and cons and tell me, what politician in their right mind would not pick 20 drones over a single F35? All the military top brass (non-pilots of course) would see it as a no-brainer.

 

20 drones = 1 F-35 seems to be pulled out of nowhere. And again, without knowing how these drones perform, getting 20 of them might be worthless for some missions.

 

Ah now I see why gunfights are downplayed with regards to the F35

 

This dated article gives insight to the mindset of the no guns philosophy. History repeats itself yet again.

 

http://www.thedailybeast.com/new-us-stealth-jet-cant-fire-its-gun-until-2019?via=googleplus

 

"The lack of a gun is not likely to be a major problem for close-in air-to-air dogfights against other jets. Part of the problem is that the F-35—which is less maneuverable than contemporary enemy fighters like the Russian Sukhoi Su-30 Flanker—is not likely to survive such a close-in skirmish. “The jet can’t really turn anyway, so that is a bit of a moot point,” said one Air Force fighter pilot.

“The JSF is so heavy, it won’t accelerate fast enough to get back up to fighting speed,” said another Air Force fighter pilot. “Bottom line is that it will only be a BVR [beyond visual range] airplane.”

 

But then again, these could be F16 and F15 pilots pooh-poohing the plane.

 

:noexpression:

These statements don't seem very accurate, like the whole wing loading thing; people point it out but don't actually make a case. The F-35 is about as heavy as the F-15. It's certainly lighter than a Flanker. Why is weight an issue?


 


I am just using educated guesses from my experience.

 

Sounds like you want a fully autonomous thinking AI robot - not sure Drone is the best term considering they have already been used in conflicts for decades.

 

The problem is that it still needs sensors similar to a manned jet's (optical/radar) to reach the same level of autonomy; unless it can dogfight totally blind, it can't be totally reliant on off-board information.

 

Conveniently some of these systems like EODAS have been developed for the F-35 and anything like that going into your robots would almost certainly be derived from that.

 

Your radar sensor might be a bit small after you have crammed in everything with the fuel for range to meet requirements. You also need a stealth tanker if your drones need a lot of refueling to go anywhere.

 

The structural weight will also be relatively high if you want a 6000 hr 30G airframe, for example, which outside of a gunfight still wouldn't be enough vs. a laser or a redesigned missile (which have 0-hour requirements).

 

The F-35 is just another stepping stone in technology development and hopefully a lot of that can be used as a basis for future systems - if not however the cost of even higher technology will be even higher - guaranteed.

 

 

I certainly await the future debate where Sprey is vehemently opposed to these new high cost /high technology blights on humanity. :thumbup:


Glad we got that sorted out. :D

 

(But see below for a back-reference to this point)

 

 

 

True, and not true.

 

When given an order, a soldier has the ability to question that order, to consider whether or not the order falls within the ROE, to evaluate whether or not the person giving the order actually has the full picture... in short, the soldier has a conscience. I do not believe we can program this into a piece of hard- and software.

 

 

 

If the drone's kill decision is based on an outside marker, it's not really autonomous in its decision making process. :smartass:

 

What's more, the drone's requirement to scan for that marker (besides using all other types of sensors in order to maneuver and to acquire and track targets) means that it will scan for outside input. If you take a very superficial look at the ways that are used nowadays to hack computers by manipulating just about any kind of input, I'm sure you would realize how insane it sounds to leave the decision making process of using lethal force to a computer.

 

 

 

That would be one aspect to consider, but I'd much rather we discuss how to avoid it altogether than discuss whom to blame if (when) it happens.

 

-----------------------------------

 

You do bring up some good and important points, for instance the "smart" weapons and what happens after release. I'm sure there are already too many soldiers who wish they could undo that button press when the situation in the target area changed during those 10, 20, 30 seconds until impact. But that's not what I mean.

 

What I mean is the actual decision making process to employ lethal force. While politicians and generals might set strategic goals, IMHO the act of actually firing a weapon should never be left to a device.

 

To put it differently: Say you wanted to protect your property, and you would consider lethal force as a valid means to achieve that goal. Would you really, really trust fully autonomous weapons to do the job? I know what my answer to that question is, and will be.

 

 

 

Great reference, thanks! I definitely need to watch the classic Star Trek again. :smartass:

 

I never said the drone would make the decision to use lethal force; that decision lies with the commander who gives the order to the drone to engage. The commander is pulling the trigger, and the weapon attached to that trigger is the drone. You are not using the proper reference here.

 

The fact that the drone is on station ready to attack based on a given marker makes it autonomous in its execution of the attack. The order to attack is given by the one firing the marker. Again, it's all about your perspective. BTW, a marker could be encrypted and scrambled on an hourly or daily basis to avoid spoofing/hacking.
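An hourly-scrambled marker like that could, hypothetically, be derived from a pre-shared key with a time-based HMAC. A minimal sketch (the key, code length, and function names are all invented for illustration, not any real system):

```python
import hashlib
import hmac
import time

# Hypothetical pre-mission shared secret loaded into the drone before takeoff.
SHARED_KEY = b"pre-mission shared secret"

def marker_code(epoch_hour):
    """Derive the valid 8-hex-digit marker code for a given hour."""
    msg = str(epoch_hour).encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()[:8]

def marker_is_valid(received, now=None):
    """Accept this hour's code, or the previous hour's to tolerate clock skew."""
    hour = int((now if now is not None else time.time()) // 3600)
    return any(hmac.compare_digest(received, marker_code(h))
               for h in (hour, hour - 1))
```

An attacker who intercepts one marker can't replay it the next hour, which is the point being made about spoofing.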

 

ROE could easily be programmed into a drone, since it's a set of logical decisions, but the drone would not prosecute/execute the mission until the trigger is pulled by a human; it could then double-check the ROE as a precautionary step before executing. Or the ROE programmed onboard could prevent it from ever executing (never initiating) a given order, if the designers went that route.
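The idea that ROE is a set of logical decisions can be illustrated with a toy rule check (the fields and rules below are invented purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Contact:
    identified: bool      # positive ID achieved?
    hostile: bool         # ID result: enemy?
    near_civilians: bool  # civilians inside the weapon's effect radius?

def roe_permits_engagement(contact, human_ordered):
    """Toy ROE gate: every condition must pass, and a human must have pulled the trigger."""
    if not human_ordered:
        return False  # the drone never initiates on its own
    if not contact.identified or not contact.hostile:
        return False  # no positive enemy ID -> no engagement
    if contact.near_civilians:
        return False  # collateral-damage rule
    return True
```

Every rule is just a boolean gate in series, and the human order is one of the gates.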

 

So many options could be used and none of it would paint the sci fi doomsday picture so many people fear.

 

"When given an order, a soldier has the ability to question that order, to consider whether or not the order falls within the ROE, to evaluate whether or not the person giving the order actually has the full picture... in short, the soldier has a conscience. I do not believe we can "program this into a piece of hard- and software.

 

The whole reason that the commander of a battlefield makes the big bucks and has absolute authority is that an effective fighting unit cannot have soldiers questioning orders; it simply does not work that way. There is no time for a soldier to sit there and evaluate a given order, and a soldier will not have the full picture in the fog of war. They simply are not high enough up the chain to see the whole picture. If a given order is illegal, then the commander will be punished in due time and through due process.

 

This is different from having the local tactical picture that is a must have for the soldier and would keep a soldier alive and effective at the mission. Not the same thing.

 

"To put it differently: Say you wanted to protect your property, and you would consider lethal force as a valid means to achieve that goal. Would you really, really trust fully autonomous weapons to do the job? I know what my answer to that question is, and will be."

 

This is the wrong analogy. You want to protect your property, and you (or your drone) notice a lethal intruder (one with a weapon) on it; you hit the button that enables your drone to attack, and it goes after the designated intruder. Once the drone has incapacitated the intruder it immediately goes back into safe mode, etc. etc.

 

If the drone attacks an innocent (a police officer checking out your yard, an unannounced neighbor), it never initiated it on its own; it simply can't, because it has to be ordered by you to go after a target. So in essence the drone is simply a smart weapon, nothing more. It's not the Terminator.

 

Notice I never paint a picture where the drone is fully on its own to decide whether or not to attack without human authority. The drone is simply another form of fire and forget. The fire being the operative word.


Edited by neofightr

Talking about drones, those things crash so often it feels like the USAF during the 1950s. There are so many Accident Investigation Board (AIB) reports; five this year alone.

http://www.airforcemag.com/AircraftAccidentReports/Pages/default.aspx?Year=2017

I never worked on them, but I talk to crew chiefs and they say they lose even more; since they can recover them, they don't have to file Class A reports. I was told that if they lose communication, some are programmed to land automatically, and they don't always choose the correct airfield or country to land in. You all remember what happened to that LM RQ-170?

 

I am not surprised by this; as you can imagine, the technology is maturing and evolving constantly, and boundaries need to be pushed, sometimes at great cost.

 

I have seen the dangers of cutting-edge tech like this first hand; the best example is the automated carrier landing system for the Hornet. In the nineties I saw it almost kill my CO when he had it attempt to land his plane on the carrier in rough seas.

 

I am pretty sure the software is much better and more reliable now.

 

We read about tragic deaths involving cars like Teslas when someone decides to let the car drive completely and ignores the warnings in doing so. The tech is getting there, and keep in mind this is for a very dynamic environment like a busy street with unpredictable humans everywhere.

 

As chaotic as battlefields are, they still are nowhere near as chaotic as a busy highway intersection with lots of cars, motorcycles, pedestrians etc.

 

So if we are that close to solving this scenario (as Google, Tesla and Apple think we are), then imagine how close we are to solving the battlefield problem.

 

Oh, and by the way: all this talk about trusting autonomous drones with lethality, and yet we are doing almost the same thing right now with 4500 lb rolling ground missiles (i.e. autonomous cars) every day on the roads of this country and others.


Edited by neofightr

In response to statements saying I want a fully autonomous drone making its own decisions to engage or not engage without human control: I do not.

 

I think this mindset comes from the idea that a human F/A pilot somehow has the authority to engage or not engage anyone they see fit without prior approval.

 

Those in the know, realize this is not the case. Typically humans are ordered to attack a specific pre-planned target, or are given real-time designations by controllers with authority from above. As I already stated this can be easily done by a drone just as it is done by a pilot.

 

In the case where humans are ordered to an area and given the weapons-free mandate to attack any target of opportunity bad things can and do happen (collateral damage and targeting of innocents).

 

Blue on blue has been a problem for modern warfare from the start and it would actually improve with AI because we could remove human emotion/error from the equation.

 

What do I mean by this? There is famous footage from the first Gulf War showing attack helo pilots aggressively attacking ground targets at night, only to discover to their horror that the targets were friendly.

 

You can hear in the video they are eager to get their first kill and even though they are verifying with other sources before engaging they are ready to jump at it as soon as they get the all clear signal.

 

So how does AI reduce this likelihood? Well, for starters, AI would not mistake a target: unlike the pilots using visual confirmations at night, it would have access to the whole spectrum of sensors in real time, processing the information dramatically faster, so it would most likely not attack, based on ROE, even when given an all-clear signal.

 

It would have no problem determining what type of enemy vehicle it is looking at, and based on its programming it could detect human weapons at a distance, which could help its logic in positively IDing the enemy.

 

Furthermore, AI drones would not be eager to get their first kill, nor would they be excited in the heat of battle (subject to increased error), because emotion is removed from the equation.

 

At worst the drone would behave as badly as a human would but at best we would have the smartest and safest weapon out there on the front lines.

 

All these drone kills of innocent humans we have heard about in the last 15 years can be attributed to direct pilot control or mission-planner input error. It is never the drone's fault.

 

**At no point would a drone initiate an attack without human orders to do so** Now a drone could easily refuse to attack based on ROE or lack of positive ID but it would not initiate an attack.

 

Here is an easy-to-understand scenario.

The drone is ordered into a hostile area with contacts but not given an order to attack. It flies into the area, IDs the first contact and verifies it as friendly, so obviously it does not attack. The next contact it intercepts is positively IDed as enemy (and its location marked), but the drone does not attack, because it never got the order to attack, and evades. It can't ID the third contact due to atmospheric interference, range, etc., so it evades.

 

The drone returns to friendly territory and provides updates to the controllers.

The drone is then ordered back into the hostile area with orders to attack *known* enemy targets in the area. Obviously it ignores the known friendly and attacks the previously marked target. It then intercepts the previously unknown target and now IDs it as enemy, but can't fire on it since it wasn't a previously known target; it evades and returns to the friendly area for new orders.

 

This is an example where the drone never initiates an attack without approval from a human authority, but when ordered it does the mission, and does it better than a human could.
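The two-sortie logic above boils down to a small decision procedure. A minimal sketch (the contact labels and the `marked_targets` set are invented names, purely illustrative):

```python
def decide(contact_id, ident, weapons_free, marked_targets):
    """Return the drone's action for one contact.

    ident is "friendly", "enemy", or None (no positive ID possible).
    weapons_free means a human has ordered attacks on *known* enemy targets.
    marked_targets holds contacts positively IDed as enemy on earlier sorties.
    """
    if ident == "friendly":
        return "ignore"
    if ident is None:
        return "evade"   # no positive ID -> never attack
    # ident == "enemy"
    if weapons_free and contact_id in marked_targets:
        return "attack"  # known, marked enemy and a human gave the order
    marked_targets.add(contact_id)  # mark it for the next sortie
    return "evade"
```

Walking the two sorties through this function reproduces the scenario: the newly identified enemy on the second sortie is marked but not attacked, because it wasn't a previously known target.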

 

Could the IDs be erroneous? Sure, but absolutely no worse than a human would do given the same circumstances, and I would argue a drone would always be better at it: no stress, no excitement and no human error in the process.


Edited by neofightr

I promise you these software failure scenarios have been thought through. If such a failure occurred, the drone would self-destruct, just like rockets automatically do after post-launch failures; hence the incentive to have robust, redundant software checks.

 

Oh, I can assure you that these software failures haven't been totally thought through, nor has a solution to them been fully developed. The reason I can assure you of this is that you're talking about a significant hurdle in AI development: how to constrain the way an AI thinks so it avoids doing things we don't want it to do. Large portions of the scientific community are still dealing with this problem, and a solution still hasn't been reached.

 

Furthermore, the simple "Error detected, kill self" idea is hogwash. First of all, the software would need to understand that it had failed and have a state within it allowing self-termination. Allowing self-termination leaves you vulnerable to enemy computer intrusions aimed at getting your drones to kill themselves. A rocket does not have AI; it has a simple rule: "If I am x percent off the optimal flight path, self-destruct." Combat by its very nature renders such simple instructions worthless.
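The rocket-style rule mentioned above really is that simple; a one-condition sketch (the threshold value is arbitrary):

```python
def should_self_destruct(cross_track_error_m, limit_m=500.0):
    """Range-safety style rule: destruct if the vehicle strays too far from the
    planned flight path. Workable for a rocket on a fixed trajectory; a combat
    aircraft maneuvering freely has no single planned path to measure against,
    which is exactly why this kind of check doesn't transfer to air combat."""
    return abs(cross_track_error_m) > limit_m
```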

 

BTW drones have been successfully used in the military for over 20 years so it already has a track record that is only getting better.

 

Human-operated drones are not autonomous, which is what we're talking about.

 

The beauty of drones is that if one suffered an engine failure over enemy land, it would self-destruct to deny the enemy, and of course there is no pilot to imprison. If the engine failed over friendly territory, it would deploy a chute (depending on size) to be recoverable.

 

A combat-capable drone would be larger than an MQ-11 and you know it, likely similar in size to a modern fighter aircraft, and in that case a simple pop-chute recovery is not going to happen. Furthermore, what is enemy territory? How does it know where it is? How does it recognize self-failure? In an autonomous drone with no uplink, these are significant hurdles; and if it includes an uplink, that uplink is a point of attack for a modern cyber-capable enemy.

 

Autonomous means no video transmissions that could be intercepted. It would be unhackable, because there would be no way for it to receive hacking instructions while in EMCON mode.

 

So now we're back to software failure. The drone skips over a line of code and now is unpredictable with heavy weapons.

 

You need to think outside the box. Drones aren't going to be controlled like the old-school RC types. They will be programmed to go silent once in enemy territory, then come back up after feet wet. The only way they can be stopped is with another drone, an advanced missile, or an EMP pulse.

 

I'm afraid I disagree; you're the one not thinking outside the box. You expect these things to simply work, but the problem is you haven't thought sufficiently about how. There are entire schools of research dedicated to the AI problem and how to resolve it, and those people with engineering and computer science PhDs haven't come up with a solution yet.

 

We are on the verge of mass autonomous cars on human populated city streets and yet F/A mil drones are a stretch? Give me a break. :doh:

 

One is a car that drives on the road. The other has to fly itself around the planet and make complex decisions in air combat, and you've given it heavy weapons to attack targets in remote areas of the planet. Autonomous cars are still decades away from being safe and reliable, and even modern cars are vulnerable to cyber attack. You're not thinking this through.

 

Oh, and by the way, if you think that because there is no talk about the latest military drones in government and military channels it is somehow a dead topic, heh, think again. :music_whistling:

 

Of course it's not a dead topic; the government would kill to get their hands on a fully capable autonomous drone, but we are many decades away from being able to field one at the level you're talking about to replace manned systems.


You should care very much about 5th gen. Technology makes a fighter. While the generational thing is partially marketing, it does point out some of the F-35's strengths. These things will play a role in combat and will likely help the F-35 come out on top of engagements with currently existing enemies. Pilot training matters, but it's something else entirely, and it's not the single deciding factor in combat.

 

 

A little. I feel like most of Sprey's points were canned lines really. Some points that he did make weren't really fleshed out from what I saw.

 

 

Increased g limits will come at the cost of weight and unit price. They're not outmaneuvering missiles.

 

They may process targets faster than humans eventually

 

I don't think they'll be much different from piloted aircraft in terms of RCS. I don't foresee the micro drone swarm taking over. To me it seems like regular aircraft will just take out the pilot. They'll be about the same cost and perform about the same.

 

The drones may or may not require training - they could use evolutionary algorithms to improve them. Whatever updates they receive will need to be designed and tested though. Commanders will also need to figure out how to best use the drones unless military command becomes AI as well.

 

I don't see why they would be cheaper.

 

Jamming will continue to be an issue. Drones will need to communicate with each other and find targets in a dense EW environment.

 

200 drones doesn't mean anything without knowing the capability of those aircraft. If they're all short legged and carry a tiny payload then they could very well be inferior to regular piloted planes.

 

No pilot risk, right

 

No human error in operation, but possible human error in programming and planning.

 

 

 

You can't really bring tankers into heated airspace. The range limit is a problem. Smaller size may also limit payload and avionics capabilities which could decrease survivability.

 

 

20 drones = 1 F-35 seems to be pulled out of nowhere. And again, without knowing how these drones perform, getting 20 of them might be worthless for some missions.

 

 

These statements don't seem very accurate, like the whole wing loading thing; people point it out but don't actually make a case. The F-35 is about as heavy as the F-15. It's certainly lighter than a Flanker. Why is weight an issue?

 

Increased g limits do not equal more weight; if anything you require less weight, because materials are getting stronger and lighter at the same time. This is the reason why missiles can pull many times more Gs than aircraft. Smaller and lighter means more Gs.

 

A drone could easily outmaneuver a missile because it has larger control surfaces which would give it better turn rates etc.

 

A drone would be much smaller and cheaper, with nearly the same capabilities as today's aircraft.

Why? No human systems onboard (i.e. no consoles, no ejection seat, etc.).

 

The smaller the drone gets in design, the lower the weight, the lower the fuel needs, and the better the performance.

 

I never said the tankers would be in hot areas; that would be silly. The smaller the drone, the lower the drag and weight, and the better the range.

 

Just like the F35 :megalol:, all a drone would need to be as effective is a couple of missiles and a single bomb. This does not require a big drone.

 

Simply put, training would be a game changer: once new tactics and algorithms are devised based on past battlefield experience, all it would take is a simple download on a private network and bam! the entire fleet of drones is updated with the latest tactics and ROE.
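A fleet-wide push like that reduces, at a minimum, to distributing a tactics package and verifying its integrity before loading it. A hypothetical sketch of the verification step (function names invented; signing and distribution infrastructure omitted):

```python
import hashlib

def verify_update(package, expected_sha256):
    """Refuse to load a tactics update whose hash doesn't match the
    separately distributed (e.g. signed) manifest entry."""
    return hashlib.sha256(package).hexdigest() == expected_sha256

def apply_update(package, expected_sha256, load):
    """Load the new tactics only after the integrity check passes."""
    if not verify_update(package, expected_sha256):
        raise ValueError("update rejected: hash mismatch")
    load(package)
```

The interesting engineering is in the omitted parts (key management, rollback, testing the new tactics before they go live), which is where the "just download it" picture gets complicated.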

 

It would literally take months if not years for new doctrines to be trained into human pilots. :book:

 

Jamming would not continue to be an issue. First off, there's no need for voice comms; second, any comms between drones could be done at close range, either through optical transmissions or high-frequency short-range data bursts. Both would be nearly impervious to jamming.

 

With regards to sensors: instead of relying on optically-sensored pilots with laggy interfaces (i.e. hands) to the aircraft's sensor suites, you would have AI directly connected to a full spectrum of sensors. By full-spectrum I mean all frequencies, from sound to light and beyond.

 

Impervious to jamming because that's a whole lot of frequencies to jam.

 

They would be cheaper because of economies of scale and module-based designs. Both apply to today's aircraft, but now we are dealing with a smaller, less cumbersome package, which would dramatically decrease cost in bulk.

 

Just look at the cost of today's desktop PCs compared to 20 years ago and you can see where the trend would be.

 

"These statement's don't seem very accurate. Like the whole wing loading thing, people point it out but don't actually make a case. The F-35 is about as heavy as the F-15. It's certainly lighter than a Flanker. Why is weight an issue?"

 

The statements feel very accurate to me based on what I have seen publicly. The problem with the F35 being about as heavy as an F15 becomes apparent when you just look at their size footprints and their control surfaces. There is footage of the tiny F35 flying formation on an F22.

 

As the competitors (Boeing) have pointed out, the F35 has sacrificed performance for stealth.

To me the F35 looks like a dense rock of a plane compared to F16/22 or F18/15.

 

This reminds me of the F-117 all over again (though not as flagrant); funny how no one talks about that plane anymore.

 

So it doesn't surprise me that Air Force pilots had nothing good to say about the plane a few years ago. And don't think for a second that LM just waved a magic wand and fixed all those concerns with today's F35. :megalol:

 

By the way, all my educated guesses on drones and their future are based on my degree in Computer Science and my advanced degree in Systems Engineering and a whole ton of tactical experience.

