See all of my movie reviews.
Avengers: Infinity War: Whoopee, another Marvel movie comes to save humanity from the other, more important things they could be doing.
Thanos is some Big Guy who is collecting the "infinity stones" in order to wipe out half the population of the universe, because it is overpopulating (I'm not sure why, if he can reshape the universe, he doesn't just plan to double its size, but apparently imagination and power don't always go together). Everyone else, except his unexplained minions, tries to stop him.
Within the context of Marvel movies - in other words, if you like Marvel movies - this is a great Marvel movie. While ten thousand main characters stretch the continuity and focus of the film for too much of the time, especially the first, oh, nine tenths - and while you pretty much have to have seen most of the other movies and have read some of the comics to know what the hell is going on - following the plot is never the point of a Marvel movie. Neither is attaining insight, being captivated by character or emotion, or getting inspired or informed. Marvel movies are about snarky humor, cool effects and battle sequences, nonsensical, uninvolving conflicts, and wish-fulfilling superpowers.
Somehow the whole thing mostly holds together. Some of the main characters don't act exactly as they used to, and powers and characters are, as usual, conveniently forgotten except when they are needed for a special effect (um ... God of Thunder? If Dr Strange can chop things off with his portal, why not chop off Thanos' hand, or continually send him to some other place in the universe?), but the movie occasionally takes you in some directions that you were not expecting. Everyone acts well enough. And there were lots of cool battles and superpowers. So ... cool?
There were some weird problems, other than forgotten powers and characters. Why does no one seem to live in Scotland? How does that new eye work? If these stones were "spread out around the universe", it seems rather convenient that all of them were in our galaxy, and several of them were close to or on Earth.
This movie had a number of scenes involving people having to decide whether to sacrifice themselves or others for the greater good; the potential positive effect of this was ruined by the fact that this "greater good" was "saving half the people in the universe from dying", so the choice was really not much of a choice. Still, it was slightly interesting how some people couldn't make the choice to sacrifice others, while some people could. Maybe I could think about that for a while and learn something.
Within the context of all movies, this movie occupies the same space as nearly all the rest of the Marvel films: inconsequential, untransforming entertainment. You watch them to keep up to speed with a trendy cultural conversation. While I admit that the universe Marvel has created is somewhat rich, and likely to have a lasting effect on the cultural consciousness of this generation, I don't think any of the movies will ever be studied in school outside of a special effects course. There is nothing interesting about any character relations, choices, symbols, or plots in these movies. All you can do is recount the battles, jokes, and powers, and say "cool".
Solo: A Star Wars Story: I expected that this would be the movie in which Star Wars went off the deep end, but, sadly, that already happened with The Last Jedi. Rogue One showed us that the SW formula could be changed and still make a pretty good movie, while The Last Jedi showed us that, no, it really could not. Solo, therefore, was a surprise to me, since it was better than I was expecting.
The story: Solo and a gal named Qi'ra are born into a poor world and have to commit crimes to survive. They get separated, and Solo finds himself in the army, then in a caper heist, and then in another one. Meanwhile, Qi'ra meets him somewhere between heists and might now be playing for the wrong side. A rag-tag band of scoundrels appears on various sides of various conflicts. Cue the betrayals, sleights of hand, and counter-betrayals.
Reviewers have not been kind, calling it derivative for not giving us more of Solo's character than we already knew from the other movies. Honestly, I liked that. This was what we saw in Rogue One, and Revenge of the Sith, for that matter.
Other reviewers said the story wasn't particularly interesting. Admittedly, the action sequences were rushed and generic, too much like Marvel movies. On the other hand, the Kessel sequence, which takes up about half of the movie, felt really, really Star Wars, and therefore really, really good. Kudos for that part of the film. Alden Ehrenreich was sometimes so-so as Solo, but occasionally he nailed it. Donald Glover was fantastic as Lando. Emilia Clarke was decent as "the woman person in the plot". Woody Harrelson was okay as chief scoundrel, but distracting, since he always acts like Woody Harrelson.
It lacks a lightsaber battle, which is one of the best things about SW movies. And it lacks the plot development, ease of pace, and mysticism that made the six main SW movies so expansive. But it is competent and enjoyable, it fits into the story, and it sets up a sequel.
Loving: A quiet, moving film about the court decision that struck down laws restricting marriage based on race. The case was Loving v. Virginia. The aptly named Richard Loving (played by Joel Edgerton, who is white) and Mildred Loving (played by Ruth Negga, who is black) got married in DC in 1958, but their home state of Virginia refused to recognize the marriage and declared it illegal for them to live together. They were thrown in jail, briefly, and then out of the state on pain of more jail. After too much time away from their family, Mildred writes a letter to Bobby Kennedy, who passes it on to the ACLU, which takes up the case.
Richard is a white male Southerner, a construction worker who patiently and evenly lays bricks, loves his wife, their families, and friends, and wants to be left alone. He is protective of his privacy and balks at the publicity the case brings them, but, although he briefly protests once in a while, he wants his wife and kids to be happy. Quiet and unassuming Mildred is no more of a troublemaker than her husband, but, with the protective strength she gets from Richard, is willing to fight - just a little - and talk to the media. Richard, from the strength and conviction he eventually learns from Mildred, allows his world to be shaken, just a bit.
The movie has some creepy moments, where you expect something dire to happen to them (as it might in another movie by some other director), but most of these come to no more than threats. It's not an action fest; it's a character study and a small history lesson. Very nice acting and directing, and not at all heavy-handed.
Disobedience: Another quiet film, also moving, also nice. This one is set in the London ultra-Orthodox Jewish community, or some facsimile thereof. As usual when I know something about the community that is being portrayed on-screen, I had to grumble during a few scenes that just could not have happened the way they were shown; I'm guessing a few liberties were taken by the screenwriters when adapting the book.
Anyway ... photographer and secular (and apparently bisexual but primarily lesbian) Ronit (Rachel Weisz) returns after years of estrangement from her community for her father the Rav's funeral, after someone has the courtesy to let her know. She finds that her cousin Dovid (Alessandro Nivola), the Rav's most prominent student and essentially his adopted child - and not too happy to see her - is now married to her friend Esti (Rachel McAdams). Esti was Ronit's "more than friend" when they were younger, which is how Ronit came to leave/be banished from the community. Ronit is surprised to find her married to a man, let alone to Dovid. Is she really happy with him?
Like every other Hollywood film that has Jews in it, this is a "Shylock" film, which means it can't end without one or more of the Jews abandoning their faith, in total or in part, which is what makes for the "happy" part of the ending (a happy ending for a film with Christians in it is for them to resist the temptation and cling to their faith, unless the film is about an abusive authority figure). So I will spoil the movie a little and say, of course Esti and Ronit have a go around, and, even though there is no actual nudity when they do, the scene is hot as hell. This is in contrast to the lovemaking scene that Dovid and Esti share earlier in the film that, despite a little nudity, is incredibly not.
All the characters are played beautifully. Weisz is convincing as Ronit, McAdams shines as Esti (once in a while she doesn't quite sell herself as a woman who has been religious all of her life), and Nivola does a fine job as Dovid, a job which the director/screenwriter nearly destroys at the end of the film. Bleah. Not a great amount happens in the movie other than in the interior worlds of the characters, which is fine. The ending has a number of missteps, which is a letdown, because it was quite lovely until then. It's not a terrible ending, just a fumble to squeeze in a few cliche scenes that I think the director thought we wanted to see, rather than the more natural scenes and conclusions that would have made for a more satisfying experience. Still a beautifully shot, beautifully acted, nice little film.
Every Day: Another happy surprise, this was better than I was led to believe. It's the story of a ... something named "A" that wakes up every day in a different body. For plot's sake, one day A decides to spend the day with and fall in love with a girl named Rhiannon (Angourie Rice, who looks like the girl who finally gets to kill the serial killer in a horror movie). After a number of other run-ins over the next few days (in other bodies, of course), A finally reveals itself to Rhiannon. Cue the skepticism, the attempt at a relationship, the obvious difficulties, and the final decision.
The movie doesn't explain how this is happening, which is fine, and it covers some of the questions and many of the difficulties that A and Rhiannon would face in this situation. Like any good science fiction film, the central element reflects and is reflected by other aspects of what it means to "change", to be constant, to be gender-fluid, to not know where and who someone is, to plan for an uncertain future, and to be yourself. This is reflected in Rhiannon's relationships with her family, her friends, her boyfriend, with A, and with herself.
This movie is a little like The Time Traveler's Wife - it's not as good as that movie was, but it's solid, well acted, well plotted, and generally works. It's not a gripping movie: neither A nor Rhiannon is a very engaging person; they're both pretty average, if polite and well-meaning. Some parts of A's past are unexplained and leave me wondering: was this body swapping happening while A was in the womb? If not, then who replaced A's original body when A swapped out for the very first time (since A never goes back to the same body)? But more important is the question about the fate of one of the main characters at the end. But I can let that go.
Saturday, 28 March 2020
Roman Update
The Roman count has now reached 4 cohorts, and this batch is ready to go off to its new home.
Cohort number 4 - Foundry figures
Monday, 23 March 2020
Friday, 20 March 2020
Assetto Corsa Competizione Intercontinental GT Pack Free Download
Assetto Corsa Competizione is the new official Blancpain GT Series videogame.
Thanks to the extraordinary quality of its simulation, the game will allow you to experience the real atmosphere of the GT3 championship, competing against official drivers, teams, cars and circuits reproduced in-game with the highest level of accuracy ever achieved. Sprint, Endurance and Spa 24 Hours races will come to life with an incredible level of realism, in both single and multiplayer modes. Assetto Corsa Competizione will feature the Blancpain GT Series' 2018 Season, and will also include the 2019 Season, which will be provided as a free update during Summer 2019.
Assetto Corsa Competizione is born from KUNOS Simulazioni's long-term experience, and it takes full advantage of Unreal Engine 4 to ensure photorealistic weather conditions and graphics, night races, and motion-capture animations, reaching a new standard of driving realism and immersion thanks to its further improved tyre and aerodynamic models.
Designed to innovate, Assetto Corsa Competizione is set to promote eSports, bringing players to the heart of the Blancpain GT Series and putting them behind the wheel of Ferraris, Lamborghinis, McLarens and many other prestigious GT racing cars, all reproduced with an outstanding level of detail.
GAMEPLAY AND SCREENSHOTS :
DOWNLOAD GAME:
♢ Click or choose only one button below to download this game.
♢ View detailed instructions for downloading and installing the game here.
♢ Use 7-Zip to extract RAR, ZIP and ISO files. Install PowerISO to mount ISO files.
Assetto Corsa Competizione Intercontinental GT Pack Free Download
INSTRUCTIONS FOR THIS GAME
➤ Download the game by clicking on the button link provided above.
➤ Download the game on the host site and turn off your Antivirus or Windows Defender to avoid errors.
➤ Once the download has finished, locate the downloaded file.
➤ To open the .iso file, use PowerISO, run the setup as admin, and install the game on your PC.
➤ Once the installation process is complete, run the game's exe as admin and you can now play the game.
➤ Congratulations! You can now play this game for free on your PC.
➤ Note: If you like this video game, please buy it and support the developers of this game.
SYSTEM REQUIREMENTS:
(Your PC must at least have the equivalent or higher specs in order to run this game.)
Minimum:
• Requires a 64-bit processor and operating system
• OS: Windows 7 64-bit Service Pack 1
• Processor: Intel Core i5-4460 or AMD FX-8120
• Memory: 4 GB RAM
• Graphics: GeForce GTX 460 2GB, Radeon HD 7770
• DirectX: Version 11
• Storage: 50 GB available space
• Sound Card: Integrated
Recommended:
• Requires a 64-bit processor and operating system
• OS: Windows 10 64-bit
• Processor: Intel Core i5-8600K or AMD Ryzen 5 2600X
• Memory: 16 GB RAM
• Graphics: GeForce GTX 1070 8 GB, Radeon RX 580 8GB
• DirectX: Version 11
• Storage: 50 GB available space
• Sound Card: Integrated
Supported Languages: English, French, Italian, German, and Spanish.
If you have any questions or encountered broken links, please do not hesitate to comment below. :D
Thursday, 19 March 2020
Exploring Monster Taming Mechanics In Final Fantasy XIII-2: Data Collection
The monster taming aspect of Final Fantasy XIII-2 is surprisingly deep and complex, so much so that I'm interested in exploring it in this miniseries by shoving the monster taming data into a database and viewing and analyzing it with a website made in Ruby on Rails. In the last article, we learned what monster taming is all about and what kind of data we would want in the database, basically roughing out the database design. Before we can populate the database and start building the website around it, we need to get that data into a form that's easy to import, so that's what we'll do today.
These regex patterns are useful and powerful, but they can also be quite tricky to get right, especially when they get long and complicated. We'll be using them to pull out all of the data we want from each monster, but we'll try to keep them as simple as possible. The next regex is more complicated, but it will allow us to pull nearly all of the properties for each monster and put it into the empty hash that was added to the list of hashes for that monster. Ready? Here it is:
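Reassembling the pieces described below, the full pattern would look something like this (a sketch; the constant name MONSTER_PROP_REGEX matches the one referenced later):

```ruby
# Property name capture, dash-colon separator, property value capture.
MONSTER_PROP_REGEX = /(\w[\w\s\.]*\w)-*:\s(\S+(?:\s\S+)*)/

# Trying it on one of Apkallu's single-property lines:
m = MONSTER_PROP_REGEX.match("Growth-------: Standard")
puts "#{m[1]} => #{m[2]}"  # Growth => Standard
```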
The first part of the regex, (\w[\w\s\.]*\w), is surrounded by parentheses and is called a capture. A capture will match on whatever the pattern is inside the parentheses and save that matching text so that it can be accessed later. We'll see how that works in the code a little later, but right now we just need to know that this is how we're going to separate out the property name and its value from the full matching text. This particular capture is the property name, and it starts with a letter or number, symbolized with \w. The stuff in the brackets means that the next character can be a letter or number, a space, or a period. Any of those characters will match. Then the following '*' means that a string of zero or more of the preceding character will match. Finally, the property name must end with a letter or number, symbolized with \w again. The reason this pattern can't just be a string of letters and numbers is because some of the property names are multiple words, and the "Lv. 05 Skill" type properties also have periods in them. We want to match on all of those possibilities.
The next part of the regex is -*:\s, which simply means it will match on zero or more '-', followed by a ':', followed by a space. Reviewing the different lines for the MONSTER 001 example above, we can see that this pattern is indeed what happens. Some cases have multiple dashes after the property name, while others are immediately followed by a colon. The colon is always immediately followed by a single space, so this should work well as our name-value separator. It's also outside of any parentheses because we don't want to save it for later.
The last part of the regex is another capture for the property value: (\S+(?:\s\S+)*). The \S+—note the capital S—will match on one or more characters that are not white space. It's the inverse of \s. The next thing in this regex looks like yet another capture, but it has this special '?:' after the open parenthesis. This special pattern is called a grouping. It allows us to put a repeat pattern after the grouping, like the '*' in this case, so that it will match on zero or more of the entire grouping. It will not save it for later, though. Since this grouping is a space followed by one or more non-space characters, this pattern will match on zero or more words, including special characters. If we look at the example monster above, we see that this pattern is exactly what we want for most of the property values. Special characters are strewn throughout, and it would be too much trouble to enumerate them all without risking missing some so we cover our bases this way.
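The difference between a capture and a grouping is easy to see by trying the value pattern on one of Apkallu's multi-word values; this throwaway check (not part of the script) shows that the (?: ) group repeats but saves nothing:

```ruby
# Only the outer parentheses create a capture; (?:\s\S+)* matches the
# extra words but doesn't produce a second capture group.
m = /(\S+(?:\s\S+)*)/.match("Fire, Lightning")
puts m[1]          # Fire, Lightning
puts m[2].inspect  # nil
```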
Fairly simple, really. We're going to match on a property name made up of one or more words, followed by a dash-colon separator, and ending with a property value made up of one or more words potentially including a mess of special characters. Note how we couldn't have used the \S character for the property name because it would have also matched on and consumed the dash-colon separator. We also could not have used the [\s\S]* style pattern for the words in the property value because it would have matched on any number of spaces between words. That wouldn't work for the first few lines of the monster properties because there are two name-value pairs on those lines. Now that we have our regex, how do we use those captured names and values, and how exactly is this going to work for the lines with two pairs of properties on them? Here's what the new add_property state looks like with some additional context:
One last thing that we're not handling is those multi-line descriptions and special notes. We need to append those lines to the correct property when we come across them, but how do we do that? Keep in mind that these extra lines won't match on MONSTER_PROP_REGEX, so we can simply detect that non-match, make sure it's not an empty line, and add it to the special notes if it exists or the description if the special notes doesn't exist. Here's what that code looks like in add_property.
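In sketch form, that continuation handling could look like the following (an illustrative reconstruction, not the original code; it assumes monsters is the list of property hashes, seeded here with one empty hash, and it sidesteps the two-pairs-per-line wrinkle discussed above):

```ruby
MONSTER_PROP_REGEX = /(\w[\w\s\.]*\w)-*:\s(\S+(?:\s\S+)*)/
monsters = [{}]  # the FSM would push a fresh hash for each MONSTER header

add_property = lambda do |line|
  pairs = line.scan(MONSTER_PROP_REGEX)
  if pairs.empty?
    # No name-value match: treat a non-empty line as a continuation of
    # the Special Notes property if it exists, or the Description if not.
    text = line.strip
    unless text.empty?
      key = monsters.last.key?("Special Notes") ? "Special Notes" : "Description"
      monsters.last[key] = [monsters.last[key], text].compact.join(" ")
    end
  else
    pairs.each { |name, value| monsters.last[name] = value }
  end
  return add_property
end

add_property.("Description----: Inflicts long-lasting status ailments on target and nearby")
add_property.("opponents.")
```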
Okay, that was a lot of stuff, so let's review. First, we read in the file that we wanted to parse that contains most of the monster taming data we need. Then, we loop through the lines of the file, feeding them into an FSM in order to find the section of the file where the list of monsters is and separate each monster's properties into its own group. Finally, we use a few simple regex patterns to capture each monster's property name-value pairs and add them to a list of hashes that will be fairly easy to print out to a .csv file later. All of this was done in 66 lines of Ruby code! Here's the program in full so we can see how it all fits together:
We still need to write the collected data out to a .csv file so that we can import it into a database, but that is a task for next time. Also, notice that we have done almost no data integrity checks on this input other than what the FSM and regex patterns inherently provide. Any mistakes, typos, or unexpected text in the file will likely result in missing or corrupt data, so we'll need to do some checks on the data as well. Additionally, this data is just the tamable monster data. We still need the other table data for abilities, game areas, monster materials, and monster characteristics. However, this is a great start on the data that was the most difficult to get, and we ended up with quite a few extra properties that we weren't intending to collect in the list. That's okay, I'm sure we'll find a use for them.
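As a preview of that next step, dumping a list of hashes to a .csv with Ruby's standard CSV library can be quite short; a sketch with made-up minimal data (column names taken from the Apkallu example, second monster hypothetical):

```ruby
require "csv"

# Two parsed monsters as property hashes; the second is hypothetical.
monsters = [
  { "Name" => "Apkallu", "Role" => "Commando", "Tame Rate" => "10%" },
  { "Name" => "Example", "Role" => "Ravager" }
]

# Use the union of all property names as the header row so monsters
# missing a property still line up column-wise (blank cell instead).
headers = monsters.flat_map(&:keys).uniq
csv_text = CSV.generate do |csv|
  csv << headers
  monsters.each { |m| csv << headers.map { |h| m[h] } }
end
puts csv_text
```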
Starting a Data Parsing Script
We already identified a good source for most of the data we want to use from the Monster Infusion FAQ post on Gamefaqs.com. However, we don't want to type the thousands of lines of data from this FAQ into our database because we would be introducing human error with the data copying, and the writers of this FAQ have already gone through all of the trouble of entering the data the first time, hopefully without mistakes. Besides, why would we go through such a tedious process when we could have fun writing a script to do the work for us? Come on, we're programmers! Let's write this script.
Since the website will eventually be in Ruby on Rails, we might as well write this script in Ruby, too. It's not absolutely necessary to write the script in Ruby because it's a one-off deal that will only be run once (when it works) to convert the text file into a format that we can easily import into a database, but Ruby is pretty darn good at text processing, so let's stick with it. I like writing scripts in stages, breaking things down into simple problems and starting with an easy first step, so let's do that here. The simplest thing we can do is read in the text file after saving the FAQ to a local file. To add a bit of debug to make sure we have the file read in, let's scan through and print out the section header for the data we're looking for in the file:
File.foreach("ffiii2_monster_taming_faq.txt") do |line|
if line.include? "MLTameV"
puts line
end
end
Already, this code gives the basic structure of what we're trying to do. We're going to read in the file, loop through every line, look for certain patterns, and output what we find that matches those patterns. The real deal will be much more complex, but it's always good to have a working starting point.

This code also has a few problems that we may or may not want to do anything about. First, it's just hanging out in the middle of nowhere. It's not in a class or function or anything more structured. If this were going to be a reusable parsing tool for converting various FAQs into rows of data, I would definitely want to engineer this code more robustly. But hey, this is a one-off script, and it doesn't need all of that extra support to make it reusable. Over-engineering is just a waste of time, so we'll leave this code out in the open.
Second, I've got two constant strings hard-coded in those lines: the file name and the search string. I may want to stick the search string in a variable because it's not terribly obvious what "MLTameV" means. The file name, on the other hand, doesn't need to be in a variable. I plan to keep this part of the code quite simple, and it's the obvious loop where the file is read in. On top of that, this code will be very specific to handling this exact file, so I want the file name to be tightly coupled to this loop. If the script is ever copied and modified to work on a different file, this file name string can be changed in this one place to point to the new file that that script works with. I don't see a need to complicate this code with a variable.
Third, when this code runs, it prints out two lines instead of one because there's another instance of "MLTameV" in the table of contents of the file. For locating the place to start parsing monster data, we want the second instance of this string. One way to accomplish this task is with the following code:
SECTION_TAG = "MLTameV"
section_tag_found = false
File.foreach("ffiii2_monster_taming_faq.txt") do |line|
if section_tag_found and line.include? SECTION_TAG
puts line
elsif line.include? SECTION_TAG
section_tag_found = true
end
end
Now only the section header line is printed when this script is run. However, as inevitably happens when we add more code, we've introduced a new problem. It may not be obvious right now, but the path that we're on with the section_tag_found variable is not sustainable. This variable is a piece of state that notifies the code when we've seen a particular pattern in the text file so we can do something different afterward. When parsing a text file using state variables like this one, we'll end up needing a lot of state variables, and it gets unmanageable and unreadable fast. What we are going to need instead, to keep track of what we need to do next, is a state machine.

Parsing Text with a Finite State Machine
Finite state machines (FSMs) are great for keeping track of where you are in a process and knowing which state to go to next, as we need to in the case of finding the section header for the list of tamable monsters in this text file. In an FSM we always have a current state that is one of a finite number of states, hence the name. Depending on the input in that state, the FSM will advance to a next state and possibly perform some output task. Here is what that process looks like in Ruby for finding the second section tag:
SECTION_TAG = "MLTameV"
section_tag_found = lambda do |line|
if line.include? SECTION_TAG
puts line
end
return section_tag_found
end
start = lambda do |line|
if line.include? SECTION_TAG
return section_tag_found
end
return start
end
next_state = start
File.foreach("ffiii2_monster_taming_faq.txt") do |line|
next_state = next_state.(line)
end
First, the states are defined as lambda methods so that they can easily be passed around as variables, but still called as functions. These variables have to be declared before they're used, so the section_tag_found method either has to be defined first because the start method uses it, or all methods could be predefined at the start of the file and then redefined with their method bodies in any desired order. Another way to define these states would be to wrap the whole thing in a class so that the states are class members, but that kind of design would be more warranted if this FSM was part of a larger system. As it is, this parser will be almost entirely made up of this FSM, so we don't need to complicate things.

We can also represent this FSM with a diagram:
The FSM starts in the Start state, obviously, and it transitions to the Section Tag Found state when there's a matching SECTION_TAG. The unlabeled lines pointing back to the same states mean that for any other condition, the state remains unchanged. This diagram is quite simple, but when the FSM gets more complex, it will definitely help understanding to see it drawn out.
Notice that running through the lines of the text file in the foreach loop became super simple. All that's necessary is to feed each line into the next_state and assign the return value as the new next_state. The current state is somewhat hidden because we keep overwriting next_state with each call's return value. Also notice that we need to be careful to always return a valid state on every path of every state method, even if it's the same state that we're currently in. Inadvertently returning something that is not a valid state would be bad, as the FSM will immediately try to call it on the next line.
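To make that last point concrete, here's a hypothetical, self-contained version of the same two-state FSM, driven by a few made-up in-memory lines instead of the FAQ file, with a fail-fast guard added to catch a state method that forgets to return a valid state. The sample lines and the guard are illustrative, not part of the original script.

```ruby
SECTION_TAG = "MLTameV"

# Same two states as above, using the lambda's last expression as the return value.
section_tag_found = lambda do |line|
  puts line if line.include? SECTION_TAG
  section_tag_found
end

start = lambda do |line|
  if line.include? SECTION_TAG
    section_tag_found
  else
    start
  end
end

lines = ["intro text", "MLTameV header", "MLTameV appears again", "other text"]
next_state = start
lines.each do |line|
  next_state = next_state.(line)
  # Fail fast if a state method returned something that can't be called on the next line.
  raise "not a state: #{next_state.inspect}" unless next_state.respond_to?(:call)
end
```

After the loop, next_state is section_tag_found, since nothing ever transitions back out of that state.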
Now that we have an FSM started, it'll be easy to add more states and start working our way through the tamable monster data. What do we need to look for next? Well, we can take a look at the data for one monster and see if there are any defining characteristics:
...............................................................................
MONSTER 001
Name---------: Apkallu           Minimum Base HP------: 1,877
Role---------: Commando          Maximum Base HP------: 2,075
Location-----: Academia 500 AF   Minimum Base Strength: 99
Max Level----: 45                Maximum Base Strength: 101
Speed--------: 75                Minimum Base Magic---: 60
Tame Rate----: 10%               Maximum Base Magic---: 62
Growth-------: Standard
Immune-------: N/A
Resistant----: N/A
Halved-------: All Ailments
Weak---------: Fire, Lightning
Constellation: Sahagin
Feral Link-----: Abyssal Breath
Description----: Inflicts long-lasting status ailments on target and nearby
opponents.
Type-----------: Magic
Effect---------: 5 Hits, Deprotect, Deshell, Wound
Damage Modifier: 1.8
Charge Time----: 1:48
PS3 Combo------: Square
Xbox 360 Combo-: X
Default Passive: Attack: ATB Charge
Default Skill--: Attack
Default Skill--: Ruin
Default Skill--: Area Sweep
Lv. 05 Skill---: Powerchain
Lv. 12 Passive-: Strength +16%
Lv. 18 Skill---: Slow Chaser
Lv. 21 Skill---: Scourge
Lv. 27 Passive-: Strength +20%
Lv. 35 Passive-: Resist Dispel +10%
Lv. 41 Passive-: Strength +25%
Lv. 42 Passive-: Resist Dispel +44%
Lv. 45 Skill---: Ruinga
Special Notes: Apkallu only spawns twice in Academia 500 AF. If you fail to
               acquire its Crystal in both encounters, you will have to close
               the Time Gate and replay the area again.
...............................................................................
That series of dots at the beginning looks like a good thing to search for. It repeats at the start of every monster, so it's a good marker for going into a monster state. We'll also want to pass in a data structure that will be used to accumulate all of this monster data that we're going to find. To make it easy to export to a .csv file at the end, we're going to make this data structure an array of hashes, and it looks like this with the new state:
SECTION_TAG = "MLTameV"
MONSTER_SEPARATOR = "........................................"

new_monster = lambda do |line, data|
  if line.include? MONSTER_SEPARATOR
    return new_monster, data << {}
  end
  return new_monster, data
end

section_tag_found = lambda do |line, data|
  if line.include? SECTION_TAG
    return new_monster, data
  end
  return section_tag_found, data
end

start = lambda do |line, data|
  if line.include? SECTION_TAG
    return section_tag_found, data
  end
  return start, data
end

next_state = start
data = []
File.foreach("ffiii2_monster_taming_faq.txt") do |line|
  next_state, data = next_state.(line, data)
end

puts data.length
I shortened the MONSTER_SEPARATOR pattern in case some separators were shorter than the first one, but it should still be plenty long to catch all of the separators between monsters in the file. Notice that we now have to pass the data array into and out of each state method so that we can accumulate the monster data in it. Right now it simply appends an empty hash for each monster it finds; we'll add to those hashes in a bit. At the end of the script, I print out the number of monsters found, which we expect to be 164, and it turns out to be a whopping 359! That's because the same separator is used after the tamable monster section of the file, and we didn't stop at the end of the section. That should be easy enough to fix:

SECTION_TAG = "MLTameV"
MONSTER_SEPARATOR = "........................................"
NEXT_SECTION_TAG = "SpecMon"

end_monsters = lambda do |line, data|
  return end_monsters, data
end

new_monster = lambda do |line, data|
  if line.include? MONSTER_SEPARATOR
    return new_monster, data << {}
  elsif line.include? NEXT_SECTION_TAG
    return end_monsters, data
  end
  return new_monster, data
end
# ...
I added another state, end_monsters, that consumes every line to the end of the file, and we enter that state from the new_monster state if we see the NEXT_SECTION_TAG. Now if we run the script again, we get a count of 166 monsters. Close, but still not right. The problem is that there are a couple of extra separator lines in the tamable monster section: one after the last monster and one after a sub-heading for DLC monsters. We're going to have to get a bit more creative with how we detect a new monster. If we look back at the example of the first monster, we see that the next text after the separator is MONSTER 001. This title is consistent for all of the monsters, with MONSTER followed by a three-digit number. Even the DLC monsters have this tag, with DLC in front of it. This pattern is perfect for matching with a regular expression (regex).

Finding Monster Data with Regular Expressions
A regex is a text pattern defined with special symbols that mean various things like "this character is repeated one or more times" or "any of these characters" or "this character is a digit." This pattern can be used to search a string of text, which is called matching the regex. In Ruby a regex pattern is denoted by wrapping it in forward slashes (/), and we can easily define a regex for our MONSTER 001 pattern:
SECTION_TAG = "MLTameV"
MONSTER_SEPARATOR = "........................................"
NEXT_SECTION_TAG = "SpecMon"
NEW_MONSTER_REGEX = /MONSTER\s\d{3}/

find_separator = nil

end_monsters = lambda do |line, data|
  return end_monsters, data
end

new_monster = lambda do |line, data|
  if NEW_MONSTER_REGEX =~ line
    return find_separator, data << {}
  elsif line.include? NEXT_SECTION_TAG
    return end_monsters, data
  end
  return new_monster, data
end

find_separator = lambda do |line, data|
  if line.include? MONSTER_SEPARATOR
    return new_monster, data
  end
  return find_separator, data
end

# ...
The NEW_MONSTER_REGEX is defined as the characters MONSTER, followed by a whitespace character (\s), followed by exactly three digits (\d{3}). I changed the new_monster state to look for a match on our new regex, and added a find_separator state that still searches for the MONSTER_SEPARATOR. Notice that the FSM will bounce between these two states, so the one that's defined later has to be declared at the top of the file; otherwise, Ruby will complain that find_separator is undefined in new_monster.

These regex patterns are useful and powerful, but they can also be quite tricky to get right, especially when they get long and complicated. We'll be using them to pull out all of the data we want from each monster, but we'll try to keep them as simple as possible. The next regex is more complicated, but it will allow us to pull nearly all of the properties for each monster and put them into the empty hash that was added to the list of hashes for that monster. Ready? Here it is:
MONSTER_PROP_REGEX = /(\w[\w\s\.]*\w)-*:\s(\S+(?:\s\S+)*)/
We'll break this regex apart and figure out what each piece means separately.

The first part of the regex, (\w[\w\s\.]*\w), is surrounded by parentheses and is called a capture. A capture will match on whatever pattern is inside the parentheses and save the matching text so that it can be accessed later. We'll see how that works in the code a little later; right now we just need to know that this is how we're going to separate out the property name and its value from the full matching text. This particular capture is the property name, and it starts with a word character (a letter, digit, or underscore), symbolized with \w. The brackets mean that the next character can be a word character, a space, or a period; any of those will match. The following '*' means that a string of zero or more of the preceding characters will match. Finally, the property name must end with a word character, symbolized with \w again. The reason this pattern can't just be a string of letters and numbers is that some of the property names are multiple words, and the "Lv. 05 Skill" type properties also have periods in them. We want to match on all of those possibilities.
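To see what this name capture accepts, we can poke at it on its own (the sample strings come from the monster listing above; String#[] with a regex and a capture index returns just the captured text):

```ruby
# The property-name capture in isolation: a word character, then any mix of
# word characters, spaces, and periods, ending on a word character.
name_capture = /(\w[\w\s\.]*\w)/
puts "Name---------: Apkallu"[name_capture, 1]       # prints "Name"
puts "Lv. 05 Skill---: Powerchain"[name_capture, 1]  # prints "Lv. 05 Skill"
```

The trailing dashes stop the match in both cases, which is what keeps the separator out of the captured name.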
The next part of the regex is -*:\s, which simply means it will match on zero or more '-', followed by a ':', followed by a space. Reviewing the different lines for the MONSTER 001 example above, we can see that this pattern is indeed what happens. Some cases have multiple dashes after the property name, while others are immediately followed by a colon. The colon is always immediately followed by a single space, so this should work well as our name-value separator. It's also outside of any parentheses because we don't want to save it for later.
The last part of the regex is another capture for the property value: (\S+(?:\s\S+)*). The \S+—note the capital S—will match on one or more characters that are not white space. It's the inverse of \s. The next thing in this regex looks like yet another capture, but it has this special '?:' after the open parenthesis. This special pattern is called a grouping. It allows us to put a repeat pattern after the grouping, like the '*' in this case, so that it will match on zero or more of the entire grouping. It will not save it for later, though. Since this grouping is a space followed by one or more non-space characters, this pattern will match on zero or more words, including special characters. If we look at the example monster above, we see that this pattern is exactly what we want for most of the property values. Special characters are strewn throughout, and it would be too much trouble to enumerate them all without risking missing some so we cover our bases this way.
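We can also try the value pattern on its own. Notice how a single space between words is absorbed but a run of spaces stops the match, which is exactly the behavior we'll rely on for the double-property lines (the sample strings here are mine):

```ruby
# The property-value pattern: one word, then any number of additional words,
# each separated by exactly one whitespace character.
value_capture = /\S+(?:\s\S+)*/
puts "Fire, Lightning"[value_capture]        # prints "Fire, Lightning"
puts "Apkallu       Minimum"[value_capture]  # prints "Apkallu" (run of spaces ends the match)
```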
Fairly simple, really. We're going to match on a property name made up of one or more words, followed by a dash-colon separator, and ending with a property value made up of one or more words potentially including a mess of special characters. Note how we couldn't have used the \S character for the property name because it would have also matched on and consumed the dash-colon separator. We also could not have used the [\s\S]* style pattern for the words in the property value because it would have matched on any number of spaces between words. That wouldn't work for the first few lines of the monster properties because there are two name-value pairs on those lines. Now that we have our regex, how do we use those captured names and values, and how exactly is this going to work for the lines with two pairs of properties on them? Here's what the new add_property state looks like with some additional context:
# ...
MONSTER_PROP_REGEX = /(\w[\w\s\.]*\w)-*:\s(\S+(?:\s\S+)*)/

find_separator = nil
new_monster = nil

end_monsters = lambda do |line, data|
  return end_monsters, data
end

add_property = lambda do |line, data|
  props = line.scan(MONSTER_PROP_REGEX)
  props.each { |prop| data.last[prop[0]] = prop[1] }
  return new_monster, data if line.include? MONSTER_SEPARATOR
  return add_property, data
end

new_monster = lambda do |line, data|
  if NEW_MONSTER_REGEX =~ line
    return add_property, data << {}
  elsif line.include? NEXT_SECTION_TAG
    return end_monsters, data
  end
  return new_monster, data
end

# ...
The double-property lines are handled with a different type of regex matcher, line.scan(MONSTER_PROP_REGEX). This scan returns an array of all of the substrings that matched the given regex in the string it was called on. Conveniently, if the regex contains captures, the array elements are themselves arrays of the captures. For example, scanning the first property line of MONSTER 001 results in this array:

[['Name', 'Apkallu'], ['Minimum Base HP', '1,877']]
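We can verify that claim directly. The spacing between the two columns below is my approximation of the FAQ's fixed-width layout; any run of two or more spaces behaves the same way:

```ruby
MONSTER_PROP_REGEX = /(\w[\w\s\.]*\w)-*:\s(\S+(?:\s\S+)*)/
line = "Name---------: Apkallu           Minimum Base HP------: 1,877"
p line.scan(MONSTER_PROP_REGEX)
# prints [["Name", "Apkallu"], ["Minimum Base HP", "1,877"]]
```

The run of spaces after "Apkallu" stops the first value capture, so the scan restarts cleanly at "Minimum Base HP" and picks up the second pair.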
We can simply loop through this array, adding each property name and value to the last hash in the list of hashes. Then, if the line was actually the MONSTER_SEPARATOR string, it didn't match any properties and we move on to the next monster. Otherwise, we stay in the add_property state for the next line.

One last thing that we're not handling is those multi-line descriptions and special notes. We need to append those lines to the correct property when we come across them, but how do we do that? Keep in mind that these extra lines won't match on MONSTER_PROP_REGEX, so we can simply detect that non-match, make sure the line isn't empty, and append the line to the special notes if that property exists, or to the description otherwise. Here's what that code looks like in add_property.
MONSTER_PROP_EXT_REGEX = /\S+(?:\s\S+)*/
# ...

add_property = lambda do |line, data|
  props = line.scan(MONSTER_PROP_REGEX)
  props.each { |prop| data.last[prop[0]] = prop[1] }
  return new_monster, data if line.include? MONSTER_SEPARATOR
  ext_line_match = MONSTER_PROP_EXT_REGEX.match(line)
  if props.empty? and ext_line_match
    if data.last.key? 'Special Notes'
      data.last['Special Notes'] += ' ' + ext_line_match[0]
    else
      data.last['Description'] += ' ' + ext_line_match[0]
    end
  end
  return add_property, data
end
By putting the extra code after the return for the MONSTER_SEPARATOR line, we can assume the current line is not a separator and simply check that MONSTER_PROP_REGEX didn't match and that there's something on the line. Then we decide which property to append the line to, and we're good to go.

Okay, that was a lot of stuff, so let's review. First, we read in the file that we wanted to parse, which contains most of the monster taming data we need. Then, we loop through the lines of the file, feeding them into an FSM in order to find the section of the file where the list of monsters is and separate each monster's properties into its own group. Finally, we use a few simple regex patterns to capture each monster's property name-value pairs and add them to a list of hashes that will be fairly easy to print out to a .csv file later. All of this was done in 66 lines of Ruby code! Here's the program in full so we can see how it all fits together:
SECTION_TAG = "MLTameV"
MONSTER_SEPARATOR = "........................................"
NEXT_SECTION_TAG = "SpecMon"
NEW_MONSTER_REGEX = /MONSTER\s\d{3}/
MONSTER_PROP_REGEX = /(\w[\w\s\.]*\w)-*:\s(\S+(?:\s\S+)*)/
MONSTER_PROP_EXT_REGEX = /\S+(?:\s\S+)*/

find_separator = nil
new_monster = nil

end_monsters = lambda do |line, data|
  return end_monsters, data
end

add_property = lambda do |line, data|
  props = line.scan(MONSTER_PROP_REGEX)
  props.each { |prop| data.last[prop[0]] = prop[1] }
  return new_monster, data if line.include? MONSTER_SEPARATOR
  ext_line_match = MONSTER_PROP_EXT_REGEX.match(line)
  if props.empty? and ext_line_match
    if data.last.key? 'Special Notes'
      data.last['Special Notes'] += ' ' + ext_line_match[0]
    else
      data.last['Description'] += ' ' + ext_line_match[0]
    end
  end
  return add_property, data
end

new_monster = lambda do |line, data|
  if NEW_MONSTER_REGEX =~ line
    return add_property, data << {}
  elsif line.include? NEXT_SECTION_TAG
    return end_monsters, data
  end
  return new_monster, data
end

find_separator = lambda do |line, data|
  if line.include? MONSTER_SEPARATOR
    return new_monster, data
  end
  return find_separator, data
end

section_tag_found = lambda do |line, data|
  if line.include? SECTION_TAG
    return find_separator, data
  end
  return section_tag_found, data
end

start = lambda do |line, data|
  if line.include? SECTION_TAG
    return section_tag_found, data
  end
  return start, data
end

next_state = start
data = []
File.foreach("ffiii2_monster_taming_faq.txt") do |line|
  next_state, data = next_state.(line, data)
end
And here's the corresponding FSM diagram:

We still need to write the collected data out to a .csv file so that we can import it into a database, but that is a task for next time. Also, notice that we have done almost no data integrity checks on this input beyond what the FSM and regex patterns inherently provide. Any mistakes, typos, or unexpected text in the file will likely result in missing or corrupt data, so we'll need to do some checks on the data as well. Additionally, this data covers just the tamable monsters. We still need the other table data for abilities, game areas, monster materials, and monster characteristics. However, this is a great start on the data that was the most difficult to get, and we ended up with quite a few extra properties that we weren't intending to collect. That's okay; I'm sure we'll find a use for them.
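As a small preview of that next step, a minimal .csv export might look like the following sketch. The stand-in data, the union-of-keys column choice, and the use of CSV.generate are my assumptions here, not the final implementation:

```ruby
require 'csv'

# Stand-in for the parsed array of hashes; the real data comes from the FSM.
data = [
  { 'Name' => 'Apkallu', 'Role' => 'Commando', 'Tame Rate' => '10%' },
  { 'Name' => 'Hypothetical DLC Monster', 'Role' => 'Ravager' }  # missing a property
]

# Use the union of all keys as the columns so extra properties aren't dropped;
# monsters missing a property simply get a blank field.
headers = data.flat_map(&:keys).uniq
csv_text = CSV.generate do |csv|
  csv << headers
  data.each { |monster| csv << headers.map { |h| monster[h] } }
end
puts csv_text
```

Before running anything like this on the real data, we'd still want those integrity checks, since one garbled property name in the file becomes one bogus column in the output.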
TOP 10 MOVIES OF 2019
The new year is here, and so Top 10 season is upon us. The tradition is to rank media in a seemingly arbitrary fashion, so here's my oh-so-personal list of movie faves that came out in 2019. What will be number 1? Read on to find out...
Monday, 16 March 2020
Tech Book Face Off: Python For Data Analysis Vs. Python Data Science Handbook
I'm starting to dabble in machine learning. (You know it's all the rage now.) As with anything new, I find it most effective to pick out a couple of books on the subject and start learning the landscape and the details straight away. Online resources are good for an introduction, or to find answers to specific questions on how to get a particular task done, but they don't hold a candle to the depth and focus that you can find from reading about a subject in a well-written book. Since I'd already had some general exposure to machine learning in college, I wanted to work through a couple of books that focused on how to do data analysis and machine learning in a practical sense with a real language and modern tools. Python with Pandas and Scikit-Learn has a huge community and plenty of active development right now, so that's the route I went with for this pair of books. I selected Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython by Wes McKinney to get the details of using the Pandas data analysis package from the author of the package himself. Then I chose Python Data Science Handbook: Essential Tools for Working with Data by Jake VanderPlas to get more coverage of Pandas from another perspective and expand into some of the Scikit-Learn tools available for machine learning. Let's see how these two books stack up for learning to make sense of large amounts of data.
Python for Data Analysis
This book covers all of the fundamentals of doing data analysis with Python using IPython, Jupyter Notebooks, Matplotlib graphing, and the main data analysis packages: NumPy and Pandas. It stops short of going into the other major data analysis and machine learning library, Scikit-Learn, because it had already filled over 500 pages with the intricate details of NumPy and Pandas. Wes McKinney is the original author of the Pandas library, so we're getting all of those details straight from the source.

The book starts out with the perfunctory chapters on installing Python and other packages, how to use IPython and Jupyter Notebooks, and running through the basic Python language features. It's filler chapters like these in nearly every programming book out there that make me think I no longer need to read introductory books on new languages. I can just go directly into books on applications of any given language, confident that they'll introduce me to the syntax and features I need to know anyway. It's not wrong, exactly, but the result is an awful lot of books with the same introductory material filling up pages that will mostly go unread.
Then there's a big chapter on using NumPy before moving on to Pandas for the rest of the book, with a chapter on the Matplotlib graphing library thrown in somewhere in the middle. The main focus is on Pandas, a huge library with tons of invaluable features for working with data. The book covers everything from reading, writing, and cleaning data to combining and merging it in various ways, doing complex calculations with aggregation and groupby operations, and working with time series and categorical data.
The number and types of operations you can do on a data set with Pandas is pretty incredible, and that makes Pandas an excellent library to learn to use well. As McKinney says in the book,
During the course of doing data analysis and modeling, a significant amount of time is spent on data preparation: loading, cleaning, transforming, and rearranging. Such tasks are often reported to take up 80% or more of an analyst's time.

With all of that time spent on low-level data tasks, Pandas makes the life of a data scientist so much easier and more enjoyable. Data can be cleaned and transformed much more easily and reliably, and you can get down to making inferences about the data quickly.
Beyond covering all of the ins and outs of Pandas, McKinney sprinkles in a few good tips on other tools that can speed up your data analysis tasks. For instance,
If you work with large quantities of data locally, I would encourage you to explore PyTables and h5py to see how they can suit your needs. Since many data analysis problems are I/O-bound (rather than CPU-bound), using a tool like HDF5 can massively accelerate your applications.
Other than these scattered tips, the book is actually fairly dry and uninspiring. It reads a lot like the (excellent) online documentation for Pandas, but doesn't add too much more than that. Even most of the examples for different features are just drab randomly generated numbers with boring labels. You could just as easily read the online docs and get all of the same material. It may be a little nicer to have it all in book form so that you can sit down and focus on it, but that's a slight advantage. I was hoping for something more, that secret sauce that you sometimes find in books on software libraries, to make the book a greater value than just reading the online docs.
The book does have a chapter at the end that goes through some extended examples of data wrangling with publicly available data sets, which is a nice way of bringing everything together, but it's a small part of a large book. All in all, it's a no-nonsense, comprehensive exploration of the Pandas library, but not too much more than that. I wouldn't recommend it because there are better options out there that add something more than the online documentation can give you, like the next book.
The Python Data Science Handbook covers most of what Python for Data Analysis does with somewhat less depth, but then goes much further into using Scikit-Learn to analyze data sets with machine learning techniques. The book is split into five large chapters, only the first of which delves into introductory minutiae by introducing the IPython interpreter. Thankfully, the book assumes you know Python already and doesn't bore the reader with another summary of lists, dicts, and comprehensions.
The next few chapters cover the use of NumPy, Pandas, and Matplotlib, and while the Pandas material is somewhat reduced from Python for Data Analysis, the Matplotlib material actually gets into the cartography drawing capabilities of this library. So, there are trade-offs in the number of topics covered in this book, as I would say the author gives more breadth while sacrificing some depth. The last chapter explores a good amount of Scikit-Learn with explanations and discussions of ten different machine learning models. This chapter added significantly to the book, grounding the features explored in the previous chapters with machine learning applications on real data sets of hand-written digits, bicycle traffic, and facial recognition. Seeing how different models performed better or worse in different applications was fascinating and enlightening.
The writing style of Jake VanderPlas was much more engaging as well. While reading the book, I felt like I was being guided by a mentor who wanted to make sure I understood the reasons behind different decisions, and why things should be done a certain way. While Python for Data Analysis focused on the "what" and "how" of programming with Pandas, the Python Data Science Handbook really addressed the "why" of data science programming, from explaining some of the reasons behind little decisions:
These two books, Python for Data Analysis and Python Data Science Handbook, clearly only scratch the surface of machine learning. They teach you how to use the main Python libraries for data analysis and machine learning, but they don't go much further than that. There's a ton more stuff to learn about how to do machine learning well and what goes on under the hood in all of these various models. I've got my eye on more machine learning books like Python Machine Learning by Sebastian Raschka, Hands-On Machine Learning with Scikit-Learn and TensorFlow by Aurélien Géron, and The Elements of Statistical Learning by Trevor Hastie, et al, among many others. There's a vast amount of literature out there now on machine learning, covering everything from practical applications to the theoretical underpinnings of the models. Suffice it to say, this is only the beginning of the exploration.
Python Data Science Handbook
The Python Data Science Handbook covers most of what Python for Data Analysis does with somewhat less depth, but then goes much further into using Scikit-Learn to analyze data sets with machine learning techniques. The book is split into five large chapters, only the first of which delves into introductory minutiae by introducing the IPython interpreter. Thankfully, the book assumes you know Python already and doesn't bore the reader with another summary of lists, dicts, and comprehensions.
The next few chapters cover the use of NumPy, Pandas, and Matplotlib, and while the Pandas material is somewhat reduced from Python for Data Analysis, the Matplotlib material actually gets into the cartography drawing capabilities of this library. So, there are trade-offs in the number of topics covered in this book, as I would say the author gives more breadth while sacrificing some depth. The last chapter explores a good amount of Scikit-Learn with explanations and discussions of ten different machine learning models. This chapter added significantly to the book, grounding the features explored in the previous chapters with machine learning applications on real data sets of hand-written digits, bicycle traffic, and facial recognition. Seeing how different models performed better or worse in different applications was fascinating and enlightening.
The writing style of Jake VanderPlas was much more engaging as well. While reading the book, I felt like I was being guided by a mentor who wanted to make sure I understood the reasons behind different decisions, and why things should be done a certain way. While Python for Data Analysis focused on the "what" and "how" of programming with Pandas, the Python Data Science Handbook really addressed the "why" of data science programming, from explaining some of the reasons behind little decisions:
One guiding principle of Python code is that "explicit is better than implicit." The explicit nature of loc and iloc make them very useful in maintaining clean and readable code; especially in the case of integer indexes, I recommend using these both to make code easier to read and understand, and to prevent subtle bugs due to the mixed indexing/slicing convention.

To carefully describing the big issues with training machine learning models:
The general behavior we would expect from a learning curve is this: A model of a given complexity will overfit a small dataset: this means the training score will be relatively high, while the validation score will be relatively low. A model of a given complexity will underfit a large dataset: this means that the training score will decrease, but the validation score will increase. A model will never, except by chance, give a better score to the validation set than the training set: this means the curves should keep getting closer together but never cross.

This conversationally instructive style was quite comfortable, and made the whole book an enjoyable read, even though the material was understandably complicated with a lot of different features and concerns to think about. VanderPlas helped it all go down easily. It was a lot to take in, but it was never overwhelming. He also had plenty of words of encouragement, knowing that when real problems with data arise, it could get discouraging:
Real-world datasets are noisy and heterogeneous, may have missing features, and may include data in a form that is difficult to map to a clean [n_samples, n_features] matrix. Before applying any of the methods discussed here, you must first extract these features from your data; there is no formula for how to do this that applies across all domains, and thus this is where you as a data scientist must exercise your own intuition and expertise.

It's easy to tell that I much preferred this book over Python for Data Analysis, and I would recommend anyone looking into data science and machine learning take a look at the Python Data Science Handbook. It's a great overview of the subject, and you'll be able to get up and running with Python quickly, experimenting with some real applications of machine learning, and learning some of the critical issues of feature engineering and model validation.
Only the Beginning
These two books, Python for Data Analysis and Python Data Science Handbook, clearly only scratch the surface of machine learning. They teach you how to use the main Python libraries for data analysis and machine learning, but they don't go much further than that. There's a ton more stuff to learn about how to do machine learning well and what goes on under the hood in all of these various models. I've got my eye on more machine learning books like Python Machine Learning by Sebastian Raschka, Hands-On Machine Learning with Scikit-Learn and TensorFlow by Aurélien Géron, and The Elements of Statistical Learning by Trevor Hastie, et al, among many others. There's a vast amount of literature out there now on machine learning, covering everything from practical applications to the theoretical underpinnings of the models. Suffice it to say, this is only the beginning of the exploration.
Sunday, 15 March 2020
Worst Or Useless Things In PUBG
While playing PUBG, you pick up weapons, items, and many kinds of equipment. Most of the items are useful and play an important role in getting a 'Winner winner chicken dinner', but some are basically useless. Here are 8 of the most useless/worst items in PUBG:
1. Clothes :
Clothes in PUBG are really only there for looks. Apart from helmets and vests, clothing has no effect on whether you win or lose.
2. Vans :
PUBG has plenty of vehicles across its many maps, and the van is, in my view, the most useless of them. Its big size makes it both slow and easy to hit, and no one wants to be slow in PUBG.
3. Quickdraw magazine :
A faster reload is the only thing the quickdraw magazine offers. Most other attachments have multiple uses, and many of them matter more than being able to reload faster.
4. Gas Can :
If you have a vehicle that is useful for your match, take a gas can. Otherwise just leave it, as it is basically useless.
5. Used vests and helmets :
A used vest or helmet has less durability left than a fresh one, so it is also less effective. Ignore used vests and helmets unless you absolutely need them.
6. Shotgun chokes :
A choke tightens the shotgun's pellet spread to extend its range, but shotgun damage still degrades as the pellets travel. Against multiple close-range targets, a choked shotgun can actually perform worse than one without a choke.
7. R1895 Pistol :
The worst pistol in PUBG is the R1895. You cannot put a scope on it, and it takes longer to reload than most other pistols, so it is best avoided.
8. Bullet Loops :
The problem with bullet loops is that all they offer is a faster reload on a handful of weapons, while other attachments provide a variety of far more useful benefits.
If you enjoyed this, comment below and let us know. Also, do share the link
https://sudragamer.blogspot.com/?m=1
and follow us for interesting updates on PUBG.
Thursday, 5 March 2020
ATOM RPG: Post-apocalyptic Free Download
ATOM RPG is a post-apocalyptic indie video game, inspired by classic CRPGs: Fallout, System Shock, Deus Ex, Baldur's Gate, and many others. In 1986, both the Soviet Union and the Western Bloc were destroyed in mutual nuclear bombings. You are one of the survivors of the nuclear Holocaust. Your mission: to explore the wild and wondrous world of the Soviet Wasteland, to earn your place under the sun, and to investigate a shadowy conspiracy aimed at destroying all that is left of life on Earth. You are a member of the society called ATOM.
• Random encounters with the dwellers of the Wasteland, both friendly and dangerous, and sometimes both at once.
• Even the smallest tasks in the game can lead to a big and intricate side story that reveals details about the world.
• A powerful character creation tool, aimed at making the Wasteland hero you want to portray.
• Numerous encounters and side missions, plus hidden adventure-style puzzles and secrets scattered around the Wastes.
• A balanced roleplaying system inspired by GURPS; each stat combination provides a unique gaming experience.
The game is updated to the latest version.
2. GAMEPLAY AND SCREENSHOTS
♢ Click or choose only one button below to download this game.
♢ View detailed instructions for downloading and installing the game here.
♢ Use 7-Zip to extract RAR, ZIP and ISO files. Install PowerISO to mount DAA files.
PASSWORD FOR THE GAME
Unlock with password: pcgamesrealm
4. INSTRUCTIONS FOR THIS GAME
➤ Download the game by clicking on the button link provided above.
➤ Download the game on the host site and turn off your Antivirus or Windows Defender to avoid errors.
➤ When the download process is finished, locate or go to that file.
➤ Open and extract the file by using 7-Zip, and run the installer as admin then install the game on your PC.
➤ Once the installation is complete, run the game's exe as admin and you can now play the game.
➤ Congratulations! You can now play this game for free on your PC.
➤ Note: If you like this video game, please buy it and support the developers of this game.
Turn off or temporarily disable your Antivirus or Windows Defender to avoid false positive detections.
(Your PC must at least have the equivalent or higher specs in order to run this game.)
• Operating System: Microsoft Windows 7, Windows 8, Windows 8.1, Windows 10 | 64-bit
• Processor: Intel Core 2 Duo 1.8GHz, AMD Athlon X2 64 2.4GHz or faster
• Memory: at least 2GB System RAM
• Hard Disk Space: 6GB free HDD Space
• Video Card: NVIDIA GeForce GTX 260 or faster graphics for better gaming experience
Supported Language: English and Russian are available and supported for this video game.
If you have any questions or encountered broken links, please do not hesitate to comment below. :D