
Neural network fun with DCS



Hello,

 

I have invested some free time in learning how a neural network works, and wanted to see whether I could make a neural network learn to fly a plane in DCS.

 

One idea was to sit and fly around recording data, then let the neural network learn from it and try to fly the same way I do. However, I decided to play around with genetic algorithms instead and let the network learn how to fly by itself.
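
For anyone curious what evolving a network like this can look like in code, here is a minimal neuroevolution sketch in Python. This is not my exact code; the network size, mutation rate, and fitness scoring below are just placeholders to show the idea:

```python
import numpy as np

# Tiny fixed-topology policy net: state (e.g. pitch, bank, heading error, speed...) -> stick/throttle.
N_IN, N_HID, N_OUT = 6, 16, 4
N_WEIGHTS = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT

def policy(weights, state):
    """Run one forward pass of the small MLP encoded in a flat weight vector."""
    i = 0
    w1 = weights[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = weights[i:i + N_HID]; i += N_HID
    w2 = weights[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = weights[i:]
    hidden = np.tanh(state @ w1 + b1)
    return np.tanh(hidden @ w2 + b2)          # control outputs in [-1, 1]

def fitness(weights):
    """Placeholder: in the real project this would fly one aircraft in the sim
    and score it on waypoints reached, time alive, etc."""
    state = np.random.randn(N_IN)
    return -np.sum(policy(weights, state) ** 2)   # dummy score just to keep this runnable

population = [np.random.randn(N_WEIGHTS) * 0.5 for _ in range(50)]
for generation in range(100):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:10]                                   # keep the best 20%
    population = [p + np.random.randn(N_WEIGHTS) * 0.1      # mutate copies of the parents
                  for p in parents for _ in range(5)]
```

There is no backpropagation anywhere; the "learning" is just selection plus mutation, which is why the first generations do silly things like rolling continuously before better behaviour survives.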

 

I developed a small simulator in Unity, with built-in physics tuned to behave like the A-4, so I could train multiple aircraft at the same time without waiting for a loading screen each time one collided with the ground. Once I was satisfied with the state of the training, the network was connected to DCS over a UDP port and an Export.lua script.
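
The bridge itself is roughly this simple: Export.lua sends the aircraft state over UDP each frame, and the network process answers with control values. Below is a minimal sketch of the Python side; the comma-separated packet layout and port are just illustrations, not my exact protocol:

```python
import socket

def trained_policy(state):
    """Stand-in for the evolved network; returns (pitch, roll, rudder, throttle) in [-1, 1]."""
    return (0.0, 0.0, 0.0, 0.7)

# Listen for flight-state packets sent by Export.lua and answer with control commands.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 9089))            # port number is arbitrary for this sketch

while True:
    data, addr = sock.recvfrom(1024)
    # Assumed packet layout: "pitch,bank,heading,altitude,speed,bearing_to_waypoint"
    state = [float(x) for x in data.decode().split(",")]
    controls = trained_policy(state)
    # Send the commands back; the Export.lua side reads these and feeds them to the aircraft.
    sock.sendto(",".join(f"{c:.4f}" for c in controls).encode(), addr)
```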

 

The result looks like this, with a couple of waypoints set around the mountain:

 

 

A test of the trained network in the Unity sim:

 

 

And the training phase, which does look like some multiplayer servers. The fun part, however, was seeing the first generations trying to roll like a missile to maintain direction, then slowly evolving.

 

 

For me it was a fun project, and I hope to one day see a little bit more "real" AI in DCS.


AMD FX 8350 4.0Ghz, 16gb DDR3, MSI R9 390, MS FF2 & CH Throttle PRO, track IR 4 & Lenovo Explorer VMR


Ever thought of using your neural network to train new pilots?

BlackeyCole 20years usaf

XP-11. Dcs 2.5OB

Acer predator laptop/ i7 7720, 2.4ghz, 32 gb ddr4 ram, 500gb ssd,1tb hdd,nvidia 1080 8gb vram

 

 

New FlightSim Blog at https://blackeysblog.wordpress.com. Go visit it and leave me feedback and or comments so I can make it better. A new post every Friday.


That is indeed interesting. You should talk to ED; that would be a dream to have in DCS!

Phanteks EvolvX / Win 11 / i9 12900K / MSI Z690 Carbon / MSI Suprim RTX 3090 / 64GB G.Skill Trident Z  DDR5-6000 / 1TB PCIe 4.0 NVMe SSD / 2TB PCIe 3.0 NVMe SSD / 2TB SATA SSD / 1TB SATA SSD / Alphacool Eisbaer Aurora Pro 360 / beQuiet StraightPower 1200W

RSEAT S1 / VPC T50 CM2 + 300mm extension + Realsimulator F18 CGRH / VPC WarBRD + TM Warthog grip / WinWing F/A-18 Super Taurus / 4x TM Cougar MFD / TM TPR / HP Reverb G2


Thanks for the constructive feedback. I also think the DCS AI needs some rework, especially the super speed at which the AI can fly and climb, or the part where it never loses sight of me.

 

One fun part of this AI project was sitting in the back of the Tomcat, letting it fly a preset course in single player while I was learning how to operate the avionics in the back seat.

 

Also, I am fairly new to Export.lua and didn't know that I can get information about enemy plane(s). I have tried a combat scenario, but only within my own sim.



Wow, yours is awesome. It would be amazing if someone could create an AI to operate ground units for MP servers: occupy a tactical commander slot and control the units' behavior, such as hiding them from CAS attacks, turning radars off and on to evade SEAD strikes, stuff like that.

 

I'm not a programmer, so I don't understand how it all works, but I'm just wondering: how hard would it be to make something like that?


Looking forward to the inevitable humans-vs-Skynet multiplayer server.


Edited by DrRed

Northrop F-5E TIGER II

Intel i7-9700k @ 5.0 GHz | GTX 3070 | 32 GB | HTC VIVE PRO 2
Virpil T50 CM2 base + CM2 grip | CM2 Throttle | VKB T-Rudder Mk.IV

I specialize in neural networks and teach them to undergrads. I have engineered and trained several networks, including a few that fly on multiplayer (with other online players having no idea that they are fighting AI). People are typically more gentlemanly on DCS servers, but when I deployed initial trials on IL-2 servers... oh my, the cussing that would ensue when people got shot down.

 

If there is one tip I would give you for maximum performance, it would be to account for the Hamiltonian operators in any given vector space (ie, ... a particular maneuver). The iterations will be self apparent ....

@Aurelius: What in god's name are you talking about?

 

Two points:

 

1. "Hamiltonian operators" have absolutely nothing to do with the AI application described above. This is word-salad, designed to impress.

I am not fooled. You've written elsewhere that you teach electrical engineering. How is it that you now claim to teach "neural networks" to undergraduates? You write that you have "engineered several networks". This is not remotely the way that people working in this field describe their work. Before posting this comment, and in an effort to give you the benefit of the doubt, I reviewed the machine learning section of your website where you describe "improving networks designed by others." This simply makes no sense. A person who actually does this work would never discuss improving a neural network, as if it were a physical object.

 

2. As an actual machine learning engineer, I find it utterly implausible that you have deployed trained neural networks to IL-2 or DCS multiplayer. Even more unlikely is your claim that the trained models exceeded or even approached human-level performance. And there's a simple reason why I know this: neither game supports the extremely high level of time compression needed to simulate the millions of flight hours necessary to train such a model without, ya know, waiting for millions of actual hours.

 

What's amusing about this is that the original poster recognized this exact problem. And what was his solution? To train the model in Unity, which does have APIs for such things! But before you suggest that you did the same, I'll point out that while Unity might provide a physics environment robust enough to train a flight model, it does not provide any such equivalent for the complex reality of DCS combat. In short, what you are describing is impossible, although I doubt you knew it when you authored your comment.

 

Beyond the above, lest anyone think that I'm being unnecessarily mean-spirited and failing to give the benefit of the doubt, I'd direct you, the reader, to the absurd "technical assistance" page on @Aurelius's website: http://jaytheskepticalengineer.com/books/the-song-of-kiri/song-of-kiri-technical-assistance/.

This is just a bridge too far. I mean, seriously, what kind of person devotes an entire page of a personal website to anonymous, but mostly self-congratulatory, statements of gratitude, while plastering said webpage with images from an assortment of institutions designed to impress? All of this in the context of a book that appears not to actually exist?

 

I don't know who you are, and I don't much care, but people lying on the internet is grating. Do us all a favor and keep it off these forums.


Edited by Someone

So, this is generally not responsive to the points I made above, although it does a pretty good job of distracting from the actual argument at hand.

 

Let's go through it point by point.

 

1. Hamiltonians: What you wrote is 100% not relevant to this discussion. This discussion is not about whether you know more about physics than I do. It's about whether you are misrepresenting yourself here.

 

2/3. Number of hours necessary to train a model / "Who says you have to iterate in the same environment": I'm combining these two because they are related, and they do a better job than I ever could of demonstrating your lack of experience in this field.

But, good news, we have a way to resolve this: share the code! Just post it publicly on GitHub, or wherever you'd like.

 

4. The book. This one is interesting. You write that I am making an attack on you, and that it's very reasonable to give thanks to experts consulted in the course of writing a book. And that's right! It is normal! It's also not what I was pointing out in my comment, though. What I pointed out was the conspicuous use of university logos, and the very existence of a "thank you" page for a book that, as you point out, is not yet finished. The fact that the experts are not named makes it all the more conspicuous. Are you saying that you expect these people to come to your website to receive your gratitude? I dunno... feels weird to me.

Regarding the end of this section where you discuss St. Jude's, moustaches, catapults, and kids' toys: I have absolutely no idea what you're talking about here.

 

 

5. "I am angry at you because your AI shot down my plane in DCS." Hahahaha, oh my goodness, what are you talking about?

Up until this point, I thought you were actually doing a good job of skillfully deflecting my argument while casting me as an angry person making personal attacks on the internet (I am/was not). But here, we just go completely off the rails.

The thing, though, is that I have actually never wondered if the enemy who killed me was an AI bot.

Do you know why this is the case?

Because it is always an AI bot that shoots me down! Literally, every time I get killed in DCS, I am killed by an AI bot.

The reason I know this is because I play on GAW/PGAW, and that environment is PvE. So yea, I am for sure not angry at your AI bot for killing me.

 

But even after you suggest that I have this weird ax to grind with you and your imaginary AI bot, you go on, about... humans and dogfighting and the singularity?

I mean, sure, I 100% agree with you... Automation is coming for most things, that's true, including fighter jet pilots. That's what makes the field so exciting right now. It's what I work on every day, although unfortunately not fighter jet AI... that would be neat.

 

 

So, to wrap up, instead of writing more paragraphs in the forums of a Russian-made fighter jet simulation, let's just resolve it amicably.

 

All I need are three things:

1) The code you wrote to train your model

2) The name of the university where you are a faculty member. Given that academics are very social, you have no doubt published, and your website solicits business and has a photo of your face, I assume this won't be a problem.

3) A private 1v1 session against the AI bot, so that it can slaughter me and I can gain appreciation for my coming robot overlords.

 

Looking forward to your response!


You're probably speaking in jest, but I doubt many humans would want to fly on such a server. It would be worse than Sven Carlsen (current world chess champion) versus IBM's Watson. In the AI trials I deployed a few years back on IL-2 BoS and IL-2 1946 servers, the AI wiped the floor with human opponents. It was not even close.

 

Couldn't you dumb it down to make it more realistic, i.e. human-powered? Forget it; after your other posts I don't care anymore.


Edited by BlackeyCole



Jay,

 

It seems that you are saying to my questions, in order:

1) No

2) No

3) No

 

Am I misunderstanding anything?

 

You discuss open sourcing in strange terms. People open-source code that was time-consuming to write all the time! That's the whole point! It's also worth pointing out that, if what you say is true, the valuable asset you have is the data collected from thousands of hours of simulation, not the code. I'm not asking for the data. So again, I don't see the concern.

 

But anyway, just to be clear, are you saying that if I identify myself by my full name and perhaps a link to my LinkedIn profile, along with some code I wrote but have not previously published, that you will do the same?


Just wanted to add that I am thoroughly enjoying the back and forth between @someone and @aurelius. They each make fine arguments and I find myself nodding along as I read them. My judgement of who is the fraud and what is real keeps ping-ponging back and forth.

 

I could probably dig deeper, check out the website, etc. but most people wouldn't, and this is a great example showing how information found on the internet can't be trusted. It also highlights that smart arguments are convincing. What a dangerous combination, exploited especially by politicians.

 

Whatever will we do as a society? We need lie detectors on our laptops, with sites requiring their use when posting.


If you peel off the layering... at the bottom it is really just profit: who profits, and who gets to watch from the sidelines.

 

When you share your source code, the whole of HUMANITY profits. As smart as you're trying to come off in this thread, that corporate/proprietary/re-engineering/profit line is just pure BS, and you know it.

My controls & seat

 

Main controls: BRD-N v4 Flightstick (Kreml C5 controller), TM Warthog Throttle (Kreml F3 controller), BRD-F2 Restyling Bf-109 Pedals w. damper, TrackIR5, Gametrix KW-908 (integrated into RAV4 seat)

Stick grips:

Thrustmaster Warthog

Thrustmaster Cougar (x2)

Thrustmaster F-16 FLCS

BRD KG13

 

Standby controls:

BRD-M2 Mi-8 Pedals (Ruddermaster controller)

BRD-N v3 Flightstick w. exch. grip upgrade (Kreml C5 controller)

Thrustmaster Cougar Throttle

Pilot seat

 

 


It does raise some interesting questions regarding publicly funded universities holding patents, a reality with which I am personally uncomfortable.

9700k @ stock , Aorus Pro Z390 wifi , 32gb 3200 mhz CL16 , 1tb EVO 970 , MSI RX 6800XT Gaming X TRIO , Seasonic Prime 850w Gold , Coolermaster H500m , Noctua NH-D15S , CH Pro throttle and T50CM2/WarBrD base on Foxxmounts , CH pedals , Reverb G2v2


Yes, ... I am saying:

 

1 - No, ... we are not releasing proprietary code to you just because you personally desire it, unless you can demonstrate a very, very good reason (and so far you have not). Much of the stuff we work on is not going to be open source anytime soon.

2 - You can find me in 10 to 30 seconds in any major search engine. The reason I do not blurt it out directly is to keep an intermediary step in place, so that our lab does not attract unwanted inquiries. We are not hidden by any means, but we wish to continue our work free from people who might waste large chunks of our time.

3 - No, ... unless, as in #1, you can demonstrate a very good reason why I or someone else in the lab should personally set up a server to meet your whims. We already have a working collaboration with Google and, no offense, but I have never heard of you.

 

Your very forum name, Someone, gives over to the idea that perhaps you are not forthcoming. Personally, I have seen three situations where a researcher developed code only to see it copied or stolen and rewritten for gain on the private market. Not going to happen. About LinkedIn, I distrust that site because it is mined so frequently by the NSA and others, but that is great if you start to put yourself out there.

 

@Anklebiter, I'm glad I've found an audience. As to who is the fraud here, I think it should be pretty clear by the end of this post :)

 

So, I'm going to split this response up into two sections:

1) a final response to @Aurelius, and

2) A discussion of how ML/AI research actually works, the relationship between that research and open source software, and the nature of value as it pertains to machine learning applications. I promise that this will be more interesting than it sounds :)

 

 

 

Section 1

@Aurelius, my goodness, we've come a long way! Let's examine how we got here.

 

First, @Aurelius posted some irrelevant gobbledygook about Hamiltonian operators, and discussed neural networks in a way that bore no resemblance to the way that researchers in the field discuss them. Given that he had previously represented himself as an electrical engineer running a media lab, I raised an eyebrow at his insistence that he is "a neural network specialist." (Note: again, not the way people discuss such things... a person who does this work would characterize themselves as an AI researcher, who may or may not employ neural networks in their work, but I digress.)

 

In response to his post, I pointed out the above: that @Aurelius is very clearly not a neural network specialist, even per his own previous statements. I also directed readers to a strange page on his website where he gives thanks to anonymous experts for assistance in the writing of a yet-to-be-completed book, but goes to otherwise great lengths to ensure that the logos of these anonymous experts' institutions are pictured.

 

Across the next few messages @Aurelius:

- Calls me a troll

- Implies that I am simply angry about being shot down by his AI bot (I still find this part of the exchange just incredible. Life really can be stranger than fiction)

- Writes some weird stuff about moustaches, St. Jude's, and children's toys

- Claims that I could easily google my way to his identity (you can't, I tried; Occam's razor would conclude that it's because he's, you know, not who he says he is)

- Crucially, fails to dispute any of the assertions of my original post: specifically, that he has represented himself as a member of an entirely different field, and has most certainly not produced the AI bot described in his first post.

 

Attempting to offer you a way out, I propose the following mechanism for verification:

1) Show me the code that was used to train the bot

2) Disclose which university you are affiliated with, so that we can verify that you do in fact run a media lab, and

3) Demonstrate that the bot exists. Preferably by slaughtering me in aerial combat.

 

@Aurelius then pivots: he cites the need to protect valuable proprietary software, implies I want to steal his code, weirdly introduces the name of my employer (while simultaneously questioning whether that company does in fact employ me), and asserts that unless I am Mark Zuckerberg (which, maybe I am :)), he will not share his work product.

 

Side note to readers: @Aurelius, not I, introduced the fact that I work at Google. He knows this information because some time ago, he posted a request for qualified collaborators on yet-to-be-defined DCS software projects. I messaged him, privately, and by way of qualification shared my employer and first name. After the first exchange, I never heard back.

It's also worth pointing out here that, despite what @Aurelius is trying to argue, I am not the person making extraordinary claims here, and my identity is not really relevant.

Finally: my pseudonym is neither more nor less opaque than that of everyone else here.

 

At this point, we're a few thousand words in, and @Aurelius has still offered zero explanation or response to the central argument of my original post.

Which... weird, right? I mean, who spills that much ink in self-defense, while not actually offering a defense? And why go to such lengths to re-categorize what was originally described as a hobby project into something so secretive? And also valuable? When all he's gotta do is show the receipts?

I don't know, but I'd expect that the commercial value of a DCS AI bot is exactly zero dollars to anyone not employed by Eagle Dynamics.

 

There's a saying that extraordinary claims demand extraordinary evidence, and though I generally agree with this principle, I am only requesting ordinary evidence. And yet, we've got nothing.

 

So who the hell is this guy? Here's my guess:

 

@Aurelius IS:

A) Probably an electrical engineer of some type. I read his review of the VKB joystick and he seems to have at least some expertise in that field. How much, I am not qualified to say.

B) Probably a staff member, though not a researcher or faculty member, at a university media lab. He's been pretty consistent on this point, and it would be a weird lie.

 

@Aurelius IS NOT:

A) A person who knows a damn thing about neural networks.

 

 

I recognize, however, that to some readers the discussion of sharing code, the value of such code, and the need to protect "proprietary information" may seem, on the surface at least, compelling. Following his blanket refusal to produce a shred of evidence, @Aurelius wrote a longer follow-up post to @Anklebiter wherein he characterized the state of AI research as "a bit like the guilds of medieval times in Western Europe" and stated that engineers at places like Apple will often steal a piece of code and then collect license fees from the derivative work. He also wrote that the value of such models is a function of the "mathematics and physics behind the network and how it is implemented exactly", and that a company like Google would be interested in running such things on "large mainframes" or "supercomputers."

 

It would be difficult, if I were trying to do so, for me to conjure a more misinformed view of how AI research works, the appropriateness of sharing code, the physical machines upon which such models run, and the nature of the US patent system.

 

So, this is going to be our focus in Section 2: disassembling the characterization of AI research, and of the reasonableness of sharing one's code, as offered by @Aurelius.

 

 

Section 2

 

Let's start with a question: what, exactly, is a neural network?

We know there's code involved, but what else?

As it turns out, the process of training a neural network to predict something is somewhat straightforward.

The reason for this is that, despite what @Aurelius writes regarding "exact implementations", virtually all AI researchers today use one of two open-source frameworks (there are a few others, but these are the only two that really matter).

They are:

a) TensorFlow, a project funded, open sourced, and given away for free by... you guessed it: Google. https://www.tensorflow.org/

b) PyTorch, a project funded, open sourced, and given away for free by... you guessed it again: Facebook! https://pytorch.org/

So when a researcher has an idea for a different type of neural network, the basic building blocks that they use to assemble it are very standardized. To be clear on this point: no one is re-implementing anything. That's the job of the framework.
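
To make that concrete, assembling a small network out of those stock building blocks looks roughly like this in PyTorch (a generic sketch with arbitrary layer sizes, not anything specific to this thread):

```python
import torch
import torch.nn as nn

# Every layer here is a stock framework component; nothing is implemented from scratch.
model = nn.Sequential(
    nn.Linear(6, 64),    # 6 input features -> 64 hidden units
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 4),    # 4 outputs
)

x = torch.randn(32, 6)   # a batch of 32 fake input vectors
y = model(x)             # forward pass: shape (32, 4)
print(y.shape)
```

The researcher's contribution is how the pieces are arranged and what they are trained on, not the plumbing.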

 

Every once in a while though, someone comes along and advances the state of the art in the field. Which might sound a little bit like what @Aurelius is describing, and ya know, maybe he's got a point?

Maybe researchers are really worried about their work being stolen and appropriated by others?

 

The good news is that, to answer this question, we don't actually have to guess at all!

We can just look at what happens when these advances are made, who makes them, and in what way they are disclosed!

 

And you know what? It turns out that everyone does the same thing:

1) Publish a paper in an academic journal describing what you did, why it's different, how good it is, and how much smarter than everyone else you, the author, are.

2) Release the code used to train the model.

3) Release the model itself.

 

But don't just take my word for it: Google researchers, in late 2018, made a giant advancement in a subfield of AI called NLP. NLP stands for Natural Language Processing and is basically the field whose work allows Siri or Alexa to understand and answer your questions (note: I don't mean the part where the speech gets turned into text; that's called Speech Recognition). This new, groundbreaking model architecture was named BERT (a nerdy joke... the previous state-of-the-art model was named ELMo... so yea... AI researchers love Sesame Street, I guess).

 

And you know what the researchers did? They immediately published a paper disclosing all the details, shared the model, and put the code on GitHub for anyone to see.

You can look at it here: https://github.com/google-research/bert
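
And the released model really is that accessible. Here is a rough sketch of loading it today using the Hugging Face transformers package (a separate install that wraps the same published checkpoints; not the original google-research code):

```python
# pip install transformers torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # downloads the released checkpoint
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Neural network fun with DCS", return_tensors="pt")
outputs = model(**inputs)                   # contextual embeddings for each token
print(outputs.last_hidden_state.shape)      # (1, num_tokens, 768)
```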

 

And if you think that this advance was maybe less recent than Google was letting on, and that BERT was old news by the time they told the rest of the world about it, that would be wrong too.

Here's an article from a few months ago announcing that Google Search engineers had JUST finished incorporating the BERT language model into the core search algorithm: https://www.theverge.com/2019/10/25/20931657/google-bert-search-context-algorithm-change-10-percent-langauge

To be clear about what happened here: Google spilled the beans about a game-changing AI advance a full year before their own co-workers could even put it to use! On purpose!

 

And this is not a weird anomaly. It literally happens all the time. A few months later, researchers at Facebook announced they'd improved upon the BERT model with a new variation called RoBERTa (I know, the names... ugh), and did the same thing: shared the code, the model, and a paper about the details.

Code here: https://github.com/pytorch/fairseq/tree/master/examples/roberta

 

This is just how science works.

 

But hey, I get it, nobody likes to give away valuable stuff for free, so maybe @Aurelius has a point, and sharing his code would be the same as giving away something really valuable.

So how do we square this with the fact that Google and Facebook are constantly giving away their code?

Aren't they profit motivated businesses?

Surely they aren't just doing this out of the goodness of their hearts!

 

And you're right! They aren't! The thing is: the code is not worth all that much, and that's why they share it.

What's valuable, and what these companies would not share, is the data that they used to train their models.

And you will notice that I did NOT ask @Aurelius to share his data, either.

 

So what's the relationship between the data, the code, and the thing we call the neural network?

 

Here's an analogy; it might seem a little weird at first, but stick with me.

 

Imagine you are in a kitchen, and you'd like to cook yourself a hamburger. You know what a hamburger is and how one should taste, but you don't know how to make it. So you open up a cookbook, turn to the hamburger section, and take a look at the recipe. And the recipe tells you all sorts of information about how the end product should turn out: it should be juicy, topped with pickles, tomato, and lettuce, served on a bun, etc. Easy!

 

That is the code: the recipe.

 

Having the code/recipe does not mean you have a hamburger, though.

 

To actually make the hamburger, you'll need the ingredients: the meat, tomato, lettuce, pickles, and bun.

That's the data: the ingredients.

 

To make the actual hamburger/neural net, you need both.

And different data/ingredients, processed using the same recipe/code, will produce different-tasting hamburgers/neural nets.

 

Or maybe totally different things entirely: imagine you substitute turkey for ground beef! You could still follow the hamburger recipe!

 

So, the code is the commodity, and not worth all that much. AI code is shared freely, and written using the same frameworks. Only the data is sacred.

To drive this point home, consider: if I posted the entire source code of Google Search right here, you would not be any closer to building a competing search engine, because to do so you would need the zillions of terabytes of data that Google has collected about web search over the past 15 years (not that I'd know... I would be in jail). And that data is not the code.
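
If it helps to see the analogy in code: here is the same tiny "recipe" run on two different sets of "ingredients" (toy data invented for this example), producing two noticeably different models:

```python
import numpy as np

def train(x, y):
    """The 'recipe': fit y = a*x + b by least squares. Same code every time."""
    a, b = np.polyfit(x, y, 1)
    return a, b

x = np.linspace(0, 10, 50)
ingredients_beef   = 2.0 * x + 1.0 + np.random.randn(50) * 0.1   # one data set
ingredients_turkey = -0.5 * x + 7.0 + np.random.randn(50) * 0.1  # a different data set

print(train(x, ingredients_beef))     # roughly (2.0, 1.0)
print(train(x, ingredients_turkey))   # roughly (-0.5, 7.0): same recipe, different "hamburger"
```

The recipe is identical both times; everything that distinguishes the two results came from the data.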

 

A few small notes, to wrap up:

1) Machine learning/AI models/neural networks do not run on "mainframes" or "supercomputers". All work in this field is done either using GPUs (literally the same kind of GPU as the one you use to run DCS), or specialized processors designed exclusively for these applications, e.g. Google TPUs: https://cloud.google.com/tpu/ (a short sketch of what the hardware handling actually looks like follows these notes).

There are lots of technical reasons why this is the case (i.e., why we don't train these models using normal CPUs). If anyone cares to know more, PM me and I'm more than happy to elaborate.

 

2) Regarding @Aurelius's claim that, e.g., Apple regularly takes free code, modifies it somewhat, patents it, and then "licenses" the final product: despite the fact that the US patent system is a horror show, you cannot patent math. And neural networks are math. And all of this code is written using open-source frameworks, which are not patentable. If someone, anyone, can show me a single example of a company that has patented and is successfully licensing a neural network building block, I will, I dunno, eat my hat, or something.
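
As promised in note 1, here is the full extent of the "special hardware" handling in a typical training script. This is a generic PyTorch sketch; the common case really is a consumer gaming GPU, not a mainframe:

```python
import torch
import torch.nn as nn

# Pick whatever accelerator is available; a consumer gaming GPU is the usual case.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)          # move the model's weights onto the GPU
batch = torch.randn(64, 10, device=device)   # allocate the data there too
out = model(batch)                           # the forward pass now runs on the GPU
print(device, out.shape)
```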

 

...And that's all I've got. I hope this was at least somewhat interesting.

But even if it wasn't, in a world of internet charlatans, at least we've caught one.

 

:)


Edited by Someone
De-caps lock on account of jay referring to my post as a tirade :)

LOL, your Master's thesis? I read the first line and was reminded of how bad the Rise of Skywalker screenplay had been, until I compared it to the tirade above. Was laughing too hard to hold my bladder by the end. I should add, though, that I believe the next rendition should be at least in the 200,000-word range.

 

@Aurelius, for you, perhaps a master's thesis. For me, 20 minutes well spent. You really should read it all, though. Most of it isn't about you in particular. You might learn something.

 

Otherwise, you have nothing else to say?


I'm glad you found them useful!

 

 

 

Very interesting indeed....

I would like to code something like that.

A friend of mine made a betting application for mobiles that uses an ML model, and he basically told me that ML is just statistics and that the data set is what matters most.

🖥️ R7-5800X3D 64GB RTX-4090 LG-38GN950  🥽  Valve Index 🕹️ VPForce Rhino FFB, Virpil F-14 (VFX) Grip, Virpil Alpha Grip, Virpil CM3 Throttle + Control Panel 2, Winwing Orion (Skywalker) Pedals, Razer Tartarus V2 💺SpeedMaster Flight Seat, JetSeat




 

@VirusAM: The coding is easier than you might expect. You should try walking through some of the tutorials on the TensorFlow or PyTorch websites (do yourself a favor and pick PyTorch, though; TensorFlow is a nightmare).
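
To give a sense of scale, a complete (if toy) training loop really is this short. This is the kind of thing those tutorials walk you through, with made-up data standing in for a real data set:

```python
import torch
import torch.nn as nn

# Toy regression problem: learn y = 3x - 1 from noisy samples.
x = torch.randn(256, 1)
y = 3 * x - 1 + 0.05 * torch.randn(256, 1)

model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # how wrong are we?
    loss.backward()               # compute gradients
    optimizer.step()              # nudge the weights

print(model.weight.item(), model.bias.item())   # should land close to 3 and -1
```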

 

Re: "ML is just stats", yea, that's not an entirely unreasonable characterization, but it's worth pointing out though that neural networks are where that generalization stops being true. There are entire classes of problems that were mostly intractable until less than 5 years ago, and are only cracked because of these approaches.

 

Specifically, if you want to nerd out: LSTMs and Transformers. They are sort of the dominant architectures right now in the NLP space.

 

CNNs (convolutional neural nets) are also an important part of the story as it pertains to image recognition, and they have applications elsewhere.
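
And all three of those are stock, one-line building blocks in the frameworks discussed above. These are generic PyTorch calls with arbitrary sizes, just to show how little is hand-rolled:

```python
import torch.nn as nn

lstm        = nn.LSTM(input_size=128, hidden_size=256, num_layers=2)    # recurrent sequence model
transformer = nn.TransformerEncoderLayer(d_model=512, nhead=8)          # one Transformer block
cnn         = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3)  # convolutional layer for images
```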

 

You might keep your friend honest by pointing out that statisticians got precisely nowhere on these problems, and it wasn't for a lack of effort. :)


Yes, humanity sure would prosper if Putin had the source code to the United States' ICBMs or China had access to Lockheed Martin's F-22 software. Everyone should have access to the source code for your home security door camera and Android/Apple facial recognition phone locks. Yep, .... good point.

 

It sure would prosper. It's called nuclear weapon parity. A concept which should be pretty obvious in this day and age.

 

Everyone has access to the codebase of my open-source video NVR system, which in turn runs on an open-source Linux distro. And anyone remotely familiar with IT administration would trust open-source software much more than some proprietary NVR running ActiveX, which most of them do. I can come up with a million more examples, really.

 

I have no problems with open-sourced home security. In fact, I'd rather prefer it to be easily auditable and backdoor-free. Something proprietary software is absolutely unable to guarantee.

 

I don't use fingerprint or facial recognition either. I'd rather keep my personal data to myself.


I'm glad you found them useful!

 

I applaud you, sir. Very well put. :thumbup::thumbup::thumbup:

