The "Brains Vs. Artificial Intelligence: Upping the Ante" challenge at the Rivers Casino in Pittsburgh is over, and the poker bot Libratus has come out on top. The AI succeeded in beating professional poker players at no-limit Texas Hold'em, a variant long considered especially hard for machines. Tuomas Sandholm and his collaborators have since published details of Libratus, which decisively beat four professional players, and the US Department of Defense has signed a two-year contract with the developers of the AI. Is poker finished for us humans? What influence will Libratus' impressive success have on the game? This article takes a look.
Let's suppose that player 1 decides to bet. Player 2 sees the bet but does not know what cards player 1 has. In the game tree, this is denoted by an information set, drawn as a dashed line between the two states.
An information set is a collection of game states that a player cannot distinguish between when making decisions, so by definition a player must have the same strategy among states within each information set.
Thus, imperfect information makes a crucial difference in the decision-making process. To decide their next action, player 2 needs to evaluate the probability of every possible underlying state, which here means every possible hand player 1 could hold. And because player 1 is making decisions as well, if player 2 changes strategy, player 1 may adapt, and player 2 needs to update their beliefs about what player 1 would do.
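The constraint that a strategy can only depend on the information set, never on the hidden state, can be sketched in a few lines (the states and probabilities below are hypothetical, not Libratus code):

```python
# Sketch of an information set (hypothetical states and probabilities).
# Player 2 observes only that player 1 bet; two distinct hidden game
# states are consistent with that observation.

# What player 2 observes -> hidden states consistent with it.
information_sets = {
    "p1_bet": ["p1_strong_hand_bet", "p1_weak_hand_bluff"],
}

# Strategies are keyed by information set, NOT by hidden state, so both
# hidden states necessarily receive the same mixed action.
strategy = {
    "p1_bet": {"call": 0.6, "fold": 0.4},
}

def act(observation):
    """Player 2 decides from the observation alone."""
    return strategy[observation]

# Whatever player 1 actually holds, player 2's behavior is identical.
assert act("p1_bet") == {"call": 0.6, "fold": 0.4}
```

Because the strategy table has no entry per hidden state, it is structurally impossible for player 2 to "peek" at player 1's cards.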
Heads-up means that only two players play against each other, making the game a two-player zero-sum game. No-limit means that there are no restrictions on the size of the bets you are allowed to make, so the number of possible actions is enormous. In contrast, limit poker forces players to bet in fixed increments, and heads-up limit hold'em has already been essentially solved. Nevertheless, it would be costly and wasteful to construct a separate betting strategy for every single-dollar difference in bet size.
Libratus abstracts the game state by grouping bets and other similar actions together, producing a smaller abstract game called the blueprint. In the blueprint, similar bets are treated as the same, and so are similar card combinations (e.g. Ace-6 vs. Ace-5). The blueprint is orders of magnitude smaller than the number of possible states in the full game.
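A minimal sketch of such bucketing, with made-up bucket boundaries rather than Libratus's actual abstraction:

```python
# Sketch of action and card abstraction (assumed bucket boundaries, not
# Libratus's actual blueprint): many raw states collapse into few buckets.

def bet_bucket(bet, pot):
    """Group a raw bet size into a coarse fraction-of-pot bucket."""
    frac = bet / pot
    for label, bound in [("small", 0.5), ("pot", 1.5), ("overbet", 3.0)]:
        if frac <= bound:
            return label
    return "all_in_like"

# A $100 bet and a $101 bet into a $100 pot land in the same bucket,
# so the blueprint needs only one strategy for both.
assert bet_bucket(100, 100) == bet_bucket(101, 100) == "pot"

def hand_bucket(hand):
    """Treat strategically similar hands alike, e.g. Ace-6 and Ace-5."""
    ranks = sorted(hand, reverse=True)
    # Assumed rule for illustration only: kickers below 7 are merged.
    return tuple(r if r >= 7 else 0 for r in ranks)

# Ace-6 and Ace-5 (ranks 14-6 and 14-5) fall into the same bucket.
assert hand_bucket((14, 6)) == hand_bucket((14, 5))
```

The real abstraction is far more sophisticated, but the effect is the same: one stored strategy covers a whole bucket of raw situations.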
Libratus solves the blueprint using counterfactual regret minimization (CFR), an iterative algorithm that converges to a Nash equilibrium in two-player zero-sum extensive-form games.
Libratus uses a Monte Carlo-based variant that samples the game tree to get an approximate return for the subgame rather than enumerating every leaf node of the game tree.
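CFR's core update rule is regret matching. As an illustrative sketch (not Libratus's Monte Carlo variant), here it is applied to one-shot rock-paper-scissors, a game small enough to read in full; the average strategy converges to the Nash equilibrium of mixing each action with probability 1/3:

```python
# Regret matching, the core update inside CFR, sketched on one-shot
# rock-paper-scissors instead of poker: same rule, far smaller game.

ACTIONS = ("rock", "paper", "scissors")
# PAYOFF[i][j]: payoff to the first player for action i against action j.
PAYOFF = ((0, -1, 1),   # rock   vs rock/paper/scissors
          (1, 0, -1),   # paper
          (-1, 1, 0))   # scissors

def current_strategy(regrets):
    """Mix in proportion to positive regret; fall back to uniform."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    if total == 0:
        return [1 / 3] * 3
    return [p / total for p in pos]

def train(iterations=20_000):
    regrets = [1.0, 0.0, 0.0]          # asymmetric start to break symmetry
    strategy_sum = [0.0, 0.0, 0.0]
    for _ in range(iterations):
        strat = current_strategy(regrets)
        for i in range(3):
            strategy_sum[i] += strat[i]
        # Self-play: the opponent uses the same strategy. Regret for action
        # i is how much better i would have done than the strategy itself.
        expected = [sum(PAYOFF[i][j] * strat[j] for j in range(3))
                    for i in range(3)]
        value = sum(strat[i] * expected[i] for i in range(3))
        for i in range(3):
            regrets[i] += expected[i] - value
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

average = train()
# The *average* strategy converges to the Nash equilibrium: 1/3 each.
assert all(abs(p - 1/3) < 0.05 for p in average)
```

The per-iteration strategies cycle wildly; it is the average over all iterations that approaches equilibrium, which is exactly the quantity CFR outputs.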
During live play, Libratus expands the relevant part of the game tree in real time and solves that subgame, going off the blueprint if the search finds a better action.
Solving the subgame is harder than it may appear at first, because different subtrees of an imperfect-information game are not independent, so a subgame cannot simply be solved in isolation. Libratus addresses this with safe subgame solving: when re-solving a subgame, it accounts for the value the opponent could have obtained elsewhere in the game, so that for any possible situation the opponent is no better off reaching the subgame after the new strategy is computed. This decouples the problem and allows a strategy for the subgame to be computed independently.
Thus, the new strategy is guaranteed to be no worse than the current one. Implemented naively, however, this "safe" approach turns out to be too conservative and prevents the agent from finding better strategies.
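The safety condition can be illustrated with a toy check (the hand labels and values below are invented for illustration): a re-solved subgame strategy is only acceptable if it hands the opponent no more value, for any holding, than the blueprint already conceded:

```python
# Toy sketch (assumed numbers) of the "safe" condition in subgame
# re-solving: a refined strategy is acceptable only if, for every hand the
# opponent might hold, their best-response value in the subgame does not
# exceed what the blueprint already conceded them.

blueprint_value_for_opponent = {"strong": 2.0, "weak": -1.0}

def is_safe(refined_value_for_opponent):
    """True if no opponent holding gains value under the refinement."""
    return all(refined_value_for_opponent[h] <= blueprint_value_for_opponent[h]
               for h in blueprint_value_for_opponent)

# A refinement that gives the opponent less with every holding is safe...
assert is_safe({"strong": 1.8, "weak": -1.2})
# ...but one that improves even a single holding for them is not.
assert not is_safe({"strong": 1.8, "weak": -0.5})
```

Requiring this for *every* holding is what makes the naive version so conservative: it refuses refinements that are better overall but slightly worse for one rare hand.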
A refined method overcomes this conservatism, finds better strategies, and won a best paper award at NIPS. In addition, while its human opponents are resting, Libratus looks for the most frequent off-blueprint actions and computes full solutions for them.
Thus, as the game goes on, it becomes harder and harder to exploit Libratus for solving only an approximate version of the game. While poker is still just a game, the accomplishments of Libratus cannot be overstated.
As written in the tournament rules in advance, the AI itself did not receive prize money even though it won the tournament against the human team.
During the tournament, Libratus competed against the players during the day. Overnight, it refined its strategy on its own by analysing the day's gameplay and results, particularly its losses.
It was therefore able to continuously patch the imperfections that the human team had discovered through their own extensive analysis, resulting in a permanent arms race between the humans and Libratus.
For the competition itself, Libratus used another 4 million core hours on the Bridges supercomputer. It had been leading against the human players from day one of the tournament.
As one of the human players put it: "I felt like I was playing against someone who was cheating, like it could see my cards. It was just that good."
This is considered an exceptionally high winrate in poker and is highly statistically significant. While Libratus' first application was to play poker, its designers have a much broader mission in mind for the AI.
To reduce the luck factor, which might heavily skew the results, two special rules were put in place:

1. All hands were mirrored: the cards one human was dealt against Libratus were dealt to Libratus in the parallel match against another human. Thus no party could just run hot over the course of the challenge.

2. No hard all-ins: when a hand was all-in before the river, no more cards were dealt and each player received his equity in chips. This also reduced the luck factor.

All four human players lost over their 30,000 hands against Libratus.
While the rules of the challenge were set to reduce the luck factor as much as possible, chance still plays a big role in the results of each hand, even with mirrored hands and even with the elimination of all-in luck.
So maybe, just maybe, the human players are actually better and the AI just got lucky. Let's look at some statistics regarding the results. The AI won at a substantial rate over a very large sample of hands. Using rough but serviceable estimates for the variance, we can ask: what is the probability that the humans actually played better than the AI yet still lost at the observed rate? It turns out this probability is very low. Meaning: it is very, very unlikely that the general result of this challenge, namely that the AI plays better than four humans, is due to the AI just getting lucky.
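To see the shape of that calculation, here is a back-of-the-envelope version with assumed numbers (a 1 bb/100 true edge for the humans, a 60 bb/100 standard deviation, and a modest observed loss rate; the challenge's actual figures differ):

```python
import math

# Rough sketch of the variance argument with assumed numbers. Suppose the
# humans were actually the better players, with a true edge of 1 big blind
# (bb) per 100 hands, and a typical heads-up standard deviation of 60 bb
# per 100 hands. Over many hands their total result is approximately
# normal, so the chance they nevertheless lose at some observed rate is a
# single tail probability.

def prob_better_player_loses(edge_bb_per_100=1.0, stdev_bb_per_100=60.0,
                             hands=120_000, observed_bb_per_100=-5.0):
    """P(total result <= observed) for a player with a positive true edge."""
    chunks = hands / 100                          # number of 100-hand blocks
    mean = edge_bb_per_100 * chunks               # expected total, in bb
    stdev = stdev_bb_per_100 * math.sqrt(chunks)  # stdev of the total
    observed = observed_bb_per_100 * chunks
    z = (observed - mean) / stdev
    # Normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Even losing at a modest -5 bb/100 over 120,000 hands is wildly
# improbable for a genuinely better player.
p = prob_better_player_loses()
assert p < 0.001
```

The larger the observed loss rate and the larger the sample, the further into the tail the result falls, which is why the "AI just got lucky" hypothesis collapses.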
No bad luck, then. Basically, the Libratus AI is just a huge set of strategies that define how to play in any given situation.
It quickly becomes obvious that there are almost uncountably many different situations the AI can be in, and for each and every one of them the AI has a strategy.
The AI effectively rolls a die to decide what to do, but the probabilities and actions are pre-calculated and well balanced.
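A sketch of that die roll, with made-up probabilities for one situation (not Libratus's actual values):

```python
import random

# Sketch of "rolling a die over pre-computed actions"; the probabilities
# here are invented for illustration. The stored strategy for one
# situation is a probability distribution over the available actions.

situation_strategy = {"fold": 0.10, "call": 0.55, "raise_pot": 0.35}

def choose_action(strategy, rng=random):
    """Sample one action according to the pre-computed probabilities."""
    actions, weights = zip(*strategy.items())
    return rng.choices(actions, weights=weights)[0]

random.seed(42)
counts = {a: 0 for a in situation_strategy}
for _ in range(10_000):
    counts[choose_action(situation_strategy)] += 1

# Empirical frequencies track the stored mixed strategy.
assert counts["call"] > counts["raise_pot"] > counts["fold"]
```

Randomizing like this is essential: a deterministic player in poker would be predictable and therefore exploitable.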
The computer played for many days against itself, accumulating billions, probably trillions, of hands, and randomly tried all kinds of different strategies. Whenever a strategy worked, the likelihood of playing it increased; whenever a strategy didn't work, the likelihood decreased.
Basically, generating the strategies was a colossal trial and error run. Prior to this competition, it had only played poker against itself.
It did not learn its strategy from human hand histories. Libratus was well prepared for the challenge but the learning didn't stop there.
Each day after the matches against its human counterparts it adjusted its strategies to exploit any weaknesses it found in the human strategies, increasing its leverage.
Our goal was to replicate Libratus from the article published in Science titled "Superhuman AI for heads-up no-limit poker: Libratus beats top professionals".