RiotArkem, anti-cheat lead for VALORANT, answered a bunch of questions over on the VALORANT subreddit. I've compiled the questions and answers below and removed some duplicates.
...I'm curious at what level (or if) the system needs human input to make the final 'cheater' decision?
It depends on the detection mechanism. For example, if we detect a specific version of a specific known cheat trying to manipulate the game executable, then we can ban for that immediately without human intervention.
However, for something like an aimbot detection model powered by mouse inputs, the level of confidence required before a ban is much higher. Except in the most blatant cases we're expecting to have analysts investigate players based on the output of the model and then make a determination.
The hope is that banning players for known cheats will give us new training data that can be used to train and refine our model. The refined model can give us new leads to investigate, which will lead us to a new cheat that we can directly ban. This new ban wave gives us more training data to refine the model, and so on. This gives us a virtuous cycle of training data that increases accuracy and lets us reduce the amount of human supervision required.
Maybe eventually the model could run in a fully autonomous mode with analysts spot checking the output but this is a future ambition. Since the cost of false positives is very high (it's very disruptive for players and undermines trust in the system) there's likely to always be a human in the loop in all but the most egregious cases.
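To picture the tiered approach he's describing, here's a minimal sketch of routing a detection score either to an automatic ban or to an analyst queue. The thresholds and function names are entirely made up for illustration; Riot hasn't published anything like real values.

```python
# Hypothetical routing of a cheat-detection score; thresholds are invented.
AUTO_BAN_THRESHOLD = 0.999  # only the most blatant, highest-confidence cases
REVIEW_THRESHOLD = 0.90     # promising leads go to a human analyst

def route_detection(player_id: str, score: float) -> str:
    """Decide what happens to one model score for one player."""
    if score >= AUTO_BAN_THRESHOLD:
        return f"auto-ban {player_id}"                   # no human needed
    if score >= REVIEW_THRESHOLD:
        return f"queue {player_id} for analyst review"   # human in the loop
    return "no action"

# Confirmed bans then become labeled training data,
# closing the virtuous cycle described above.
```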
Will there be room for false positives or is it accurate 100% of the time? If there can be false positives how will support deal with these cases?
We're aiming for 100% accuracy, especially when it comes to automated banning systems, but if there are mistakes we will fix them. All anti-cheat related support tickets are investigated, and in the rare case of a false positive we will reverse the ban and apologize.
Will you guys be dishing out manual bans? I find that the current industry doesn't do this, and it was one of the major positive points of user-dedicated servers in the past. However, the false-positive rate was very high because sometimes people let their emotions and egos get involved.
We will be dishing out manual bans but I expect that most of our bans will come from automated systems due to the scale of the playerbase.
I want to empower our player support teams to manually ban for blatant cheats and we also have a team of anti-cheat analysts who investigate players and cheat developers that can give manual bans.
One thing we try to do is to have manual investigations lead to ban waves rather than single bans. If someone needs to be manually banned we want to know why the automated system didn't catch them and use this case to improve the system in the future. That way we can go from one manual investigation of a cheater to a ban for everyone using that same cheat tool.
Without giving away anything confidential, can you go into any detail on the aimbot detection? Would this combat aimbots that have a gradual acceleration curve when moving to the target?
There are a bunch of different features that we're examining when it comes to mouse inputs for aimbot detection; acceleration and impulse are two of them, and we can use those features to try to model aimbot behavior.
I guess theoretically there could be an aimbot that is indistinguishable from a human, but at that point it's basically account sharing between you and your aimbot.
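To give a sense of what features like acceleration look like in code, here's a toy extraction from raw mouse samples. The "humans are noisy" heuristic at the end is my own guess at a tell-tale signal, not anything Riot has confirmed.

```python
import numpy as np

def mouse_features(t: np.ndarray, x: np.ndarray, y: np.ndarray) -> dict:
    """Toy kinematic features from mouse samples (timestamps and positions)."""
    dt = np.diff(t)
    vx, vy = np.diff(x) / dt, np.diff(y) / dt  # velocity components
    speed = np.hypot(vx, vy)
    accel = np.diff(speed) / dt[1:]            # acceleration along the path
    return {
        "peak_speed": speed.max(),
        "peak_accel": np.abs(accel).max(),
        "accel_variance": accel.var(),  # human aim jitters; a naive bot's
    }                                   # perfectly smooth curve won't
```

A classifier trained on labeled data would consume features like these; for example, near-zero acceleration variance during flick shots would look suspicious.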
Is care being put in to make sure the anti-cheat doesn't trigger on anti-malware programs going about their business? It's been a common pitfall for new anti-cheat programs to see an antivirus scanning files, say "aha, hacker!", and ban the person.
We've been doing a lot of compatibility testing to try and decrease the chance of Vanguard conflicting with anti-virus software, other anti-cheat software or unusual computer configurations.
We don't want any conflicts or false positives and will be working hard to fix any that appear.
How are you guys going to handle false positives? Is it going to be like VAC where the decision is final or will an appeal process for manual review take place?
If people feel that they have been wrongly banned they can submit a support ticket and we'll examine their case and check for false positives.
Any news on the ban screen? 🤔 Would love to see it!
We've adjusted the screen a little bit but I don't know if it's in its final form yet (maybe it'll always be a work in progress). I'll see if I can get a new image to show off for everyone soon.
What will toxic players get? Does this system detect bad words, like in League, and **** them from existence?
This is a better question for another dev. There are plans for systems to encourage players to be sportsmanlike and punishments for players who are disruptive but I'll leave the details up to my colleagues to share.
How does it detect cheating, and what are some ways that it might recognize cheating?
There are many ways that we can detect cheating, some of the ways involve detecting a specific cheat directly (e.g. "is this computer running cheat.exe"), other methods involve detecting the technical action of the cheat (e.g. "unknownprogram.exe is trying to modify the game, wtf!"), and yet more methods rely on analyzing player behaviors (e.g. "why does x_nooblord_x_420 keep getting running wallbang headshots?" or "why does gangawarrior69's mouse move like a robot?").
In addition we can find cheats through manual investigations. These investigations can start due to suspicious game stats (e.g. "why is this person 100% accurate?" or "why is this person's win rate 95%?"), due to reports (e.g. "literally everyone is reporting this person for cheating"), or due to out-of-game research or tips (e.g. "what's up with this BuyVALORANThacks.com.au website?").
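As a toy illustration of the first category ("is this computer running cheat.exe"), the sketch below matches running process names against a known-cheat list using the third-party psutil library. A real anti-cheat keys on much harder-to-spoof signals (code hashes, signatures, kernel telemetry), and the names here are invented.

```python
import psutil  # third-party: pip install psutil

# Invented names; a real detector would never trust a process name alone.
KNOWN_CHEAT_NAMES = {"cheat.exe", "aimhelper.exe"}

def find_known_cheats() -> list[str]:
    """Return names of running processes that match the known-cheat list."""
    hits = []
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if name in KNOWN_CHEAT_NAMES:
            hits.append(name)
    return hits
```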
Did you test the anti-cheat out? Like did you, or someone else, download cheating software and play the dev version of VALORANT to see if it worked?
During development I built an aimbot and a wallhack to test some of our security systems, and so did some of my colleagues (and some consultants we hired). We also tested against some generic cheats that can work on multiple games. The feedback from these exercises has been useful in refining our security.
...does VALORANT have a chat feature, like in Overwatch or Rainbow Six?
VALORANT does have a communication system to help you coordinate with your team but I'll leave the details to another dev to discuss.
Why would they ban PCs temporarily and not permanently? (Referring to recent confirmation that Riot will give out temporary hardware bans to hackers.)
Some good reasons in this thread, but I wanted to add another one: if hardware bans are temporary, we feel more comfortable handing them out frequently.
We're still happy to give out permanent hardware bans to repeat offenders but I think for a first offense a temporary hardware ban is a good addition to an account ban.
Can we expect an API for Third Party apps on release or soon after?
We're still working on this but the plan is to have a developer API at or shortly after launch.
Will the detection process happen in-game or post-game? If it's in-game, could it eject someone while they're cheating? If it's post-game, approximately how much time would it take to detect and ban the player?
How quickly a player is detected will depend on the specific method we're using to find the cheat.
Something like a hard detection for a known cheat is very quick and we can have a very high confidence of the result so we can eject the player and cancel the match (you can see an example of this with the "Match Terminated" screen in the original trailer).
For a behavioral detection or a manual investigation it can take longer before we're confident enough to apply the penalty. In those cases the player might be between matches or in a different match when the penalty is applied.
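Putting the two timing paths side by side, here's a small sketch: hard detections act immediately and terminate the match, while behavioral evidence is held until confidence is high enough, with the ban landing at a match boundary. The types and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    player_id: str
    kind: str          # "hard" (known cheat) or "behavioral"
    confidence: float

def apply_penalty(d: Detection) -> str:
    if d.kind == "hard":
        # Hard detections are high-confidence by construction:
        # eject now and cancel the match ("Match Terminated").
        return f"terminate match, ban {d.player_id} immediately"
    if d.confidence >= 0.99:  # invented threshold
        return f"ban {d.player_id} at the next match boundary"
    return f"keep collecting evidence on {d.player_id}"
```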
How will hardware bans work in South Korea, though, where quite a lot of people play on the same computer in PC Bangs? OW had a huge problem with cheaters using those computers because they did not require you to buy the game, so cheaters could just make new accounts over and over.
One of the reasons that we're going to be giving out temporary hardware ID bans is to limit the collateral damage that banning shared PCs would have.
Our strategy is that hardware bans aren't tied to accounts. If you try to play from a banned machine your account doesn't get automatically banned; you just can't play from it (if this happens to your account a lot we'd probably investigate, though). This means that if a PC Bang computer is hardware banned, it's bad for the operator of the PC Bang and annoying for the player trying to play, but no lasting damage is done.
That said, we will strongly encourage PC Bang operators to monitor their PCs for cheats and are willing to take action against organizations that are profiting off cheating.
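The decoupling he describes is easy to picture as two independent ban lists, one keyed by hardware ID and one by account. This is a minimal sketch of that data model with invented identifiers, not Riot's actual implementation.

```python
# Two independent ban sets: banning a machine doesn't touch any account.
banned_hwids: set[str] = {"HWID-PCBANG-042"}
banned_accounts: set[str] = set()

def can_play(account: str, hwid: str) -> bool:
    if account in banned_accounts:
        return False  # the account itself is banned
    if hwid in banned_hwids:
        # The machine is banned, but the account stays clean:
        # the same player can log in from a different PC and play.
        return False
    return True

assert can_play("honest_player", "HWID-PCBANG-042") is False  # blocked here
assert can_play("honest_player", "HWID-HOME-001") is True     # fine elsewhere
```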
Any plans to implement a trust factor or "prime" system where people link their phone number or ID or something to verify their accounts? Or a system where people pay, like in CS, to avoid playing with fresh accounts?
We're definitely open to the idea of a prime system or a trust-factor-like system. We haven't built one yet, but it's one of the tools we're considering for the future.
Will I get banned if I have a cheat program running for other single-player games (like trainers and such)?
In general we won't ban you for running cheats for other games. The caveat here is that some tools are useful for cheating at multiple games (CheatEngine is one example).
If you use CheatEngine to cheat at Binding of Isaac or whatever (I've heard of people using it as a regular debugger for programming too), we don't mind, but please don't have it running while you're trying to play VALORANT or we will assume that you want to cheat in this game too.
In CSGO there is a program I use called VibranceGUI, and it uses my GPU to change the vibrance in various programs on my computer. ...I am wondering if using a program like a vibrance enhancer would end up tripping the anti-cheat system and cause any issues?
I haven't tested VibranceGUI itself but I'm pretty sure it will be compatible. At the very least I can tell you that we haven't deliberately gone out of our way to block or detect it.
Where it can get tricky is that some non-cheat applications can use the same techniques that a cheat would use. In those cases we won't ban you for using the application but it might get blocked by our protective measures.
I wonder if their AI is based on Valve's implementation. My understanding was that Valve was preparing to make their system available to other developers, but I guess it would make sense to restrict it to games running Steamworks or something.
Our AI experiments are distinct from Valve's but we're definitely paying attention to what Valve is doing.
I really enjoyed John McDonald's GDC talk about VACnet, and I had the pleasure of meeting John and talking to him about it a while back.
...can you tell us an estimate of when the game will release? At least which quarter of the year.
We're planning on releasing the game in Summer 2020 (Northern Hemisphere summer).