by sussexpob » Fri May 25, 2018 10:33 am
This isn't isolated, though. This whole revamp looks like nothing but a false-flag operation by Strauss and his superiors, a way to make it seem far more is going into the system than actually is. Reading everything these people say on these matters, I have pretty much no doubt that they are hiding behind an elaborate deceit of making their job sound scientific and far more worthy than it actually is. It seems that, rather than put systems in place that work, we are consistently embedding bad practices simply to take accountability away from the people who administer the system. One only has to look at all the player-pathway nonsense that Downton and Flower created, and with which we were told Ed Smith was some once-in-a-generation genius, to see that.
I follow a lot of American sports, and have read quite a few books on analytics and sabermetrics, and I can say pretty confidently that it was never designed as a system for improving players. Flower in particular seems wrapped up in this endless data capturing to define how players will react, their ability to learn and all this stuff, while Mo Bobat is obsessed with filtering in American practices to facilitate it. But these systems were introduced to get away from that type of approach. Analytics isn't really a coaching model; it is a GM model for putting together teams with maximum efficiency. That is, you aren't measuring what a player CAN have, but what he DOES have, and how to fit that into creating a team.
I mean, let's take an example. You might find that a crop of fast bowlers from the county game all have stand-out performances and make the national development pool. For argument's sake, say you have to pick four players out of ten, all from different counties. Almost certainly, Broad and Anderson are two of those bowlers, and almost certain to take the new ball. A sabermetrics-type assessment would then try to look at the exact role the third and fourth bowlers would have. It's about creating situational statistics, data that is relevant to assessing exactly what you expect in a role-specific model.
So, say the next-best county bowler averages 20 a wicket. He gets picked as the third seamer. Yet if you were to break down his county performances, you might find that he takes 50% of his wickets very cheaply with the new ball, and that when asked to bowl after the 25-over mark he averages 35. You have another bowler who averages 34 per wicket over his career, but operates exclusively as first change and bowls only with an old ball. Having created a new statistic to measure what you expect of the number-three bowler, you find that the presumed vastly inferior player is actually superior for the purpose. Yet despite the data saying that, there seems to be this lack of understanding: the approach taken is not to pick the guy who serves the purpose better, but to coach the first one into being better. That is the England model. It's not how it's used in American sports. There it is specifically focused on grouping the skills at your disposal and running theoretical models, built on data threads of things that have actually occurred, to determine how to achieve maximum efficiency. Actual metrics, not fantasy "could occurs".
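To make the point concrete, here is a minimal sketch of that role-specific split. All the numbers are invented for illustration (they are not real county figures); the idea is just that you compare bowlers on the phase the role actually requires, not on their overall average.

```python
# Sketch: compare bowlers on phase-specific averages, not overall ones.
# All figures below are hypothetical, invented purely for illustration.

def phase_average(spells, phase):
    """Runs conceded per wicket for a bowler in a given phase of play."""
    runs = sum(s["runs"] for s in spells if s["phase"] == phase)
    wkts = sum(s["wickets"] for s in spells if s["phase"] == phase)
    return runs / wkts if wkts else float("inf")

# Hypothetical bowler A: brilliant with the new ball, poor with the old.
bowler_a = [
    {"phase": "new_ball", "runs": 300, "wickets": 25},
    {"phase": "old_ball", "runs": 700, "wickets": 20},
]
# Hypothetical bowler B: mediocre career average, but all old-ball work.
bowler_b = [
    {"phase": "old_ball", "runs": 1020, "wickets": 30},
]

# Overall average makes A look clearly better...
overall_a = (sum(s["runs"] for s in bowler_a)
             / sum(s["wickets"] for s in bowler_a))
print(round(overall_a, 1))                         # 22.2
# ...but for a third-seamer role the old-ball split is what matters,
# and there B edges it.
print(phase_average(bowler_a, "old_ball"))         # 35.0
print(phase_average(bowler_b, "old_ball"))         # 34.0
```

The situational statistic reverses the verdict the headline average gives, which is exactly the point being made above.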
It's pretty ironic, because England seem bizarrely committed to a system whose purpose they don't understand, but then also use contradictory principles to govern who they want to develop. Strauss the other day came out and said the new scouts will be looking for players who can bowl 90mph, spinners who rip it, bowlers who can reverse-swing the ball, and will look to bring these people into development set-ups.
Aside from the fact that it's another obvious contradiction (Strauss was keen to point out that it's not all about data, when elsewhere we are being told it pretty much is), this type of rating of potential stand-out characteristics has long been debated in the world of baseball. Bizarrely, a lack of understanding even in the professional game still seems to sway many people back towards the traditional, old-fashioned methods that analytics has largely started to show up as poor.
In baseball, the talk is always of the perfect "5 tool" prospect: can he hit the ball, can he hit it hard, can he run fast, can he catch, and does he have a powerful arm? A player who has all those tools guarantees himself top billing in a draft, despite any contrary evidence of performance. Essentially, each player is given a score on each tool, and being average or above average on all of them leaves you with a massive score. People to this day grade prospects on those exclusively. Yet people like Bill James (the man whose work helped end the Red Sox's 86-year wait to win a World Series) will tell you that creating secondary assessments is worth far more. If a player is terrible at hitting but excellent at stealing bases, what does it matter that he can't hit? The net effect of a player's worth comes down to how he performs, and any talent that contributes to that overall picture should be included. Traditional assessments like batting average are argued to be defunct and way out of date.
As an example, take three batsmen in cricket: Tendulkar, Vince and Jonty Rhodes. Grade them as draft prospects and you would probably find Vince comes out best: a player with a crunching range of shots all around the wicket, a better fielder and catcher than the great Indian, rating as a "5 tool". Rhodes would probably score average for batting and good for fielding; Tendulkar above average, but lacking the full five tools on arm strength and catching. Traditional batting averages would place Tendulkar miles apart on his own; so in draft terms, you might find Tendulkar and Vince coming up on scout assessments way up among the higher picks at the top, with Rhodes somewhere at the very bottom.
Yet if you were to capture everything these players did on a cricket field and run it through a theoretical model of consistent performance, you might find that Jonty Rhodes was the most useful to a team. Tendulkar may score 15 more runs per innings, but anything travelling at speed more than a yard past him goes for runs, while Rhodes is snaffling even aerial balls a metre or two away from him. Tendulkar might drop a catch that should be taken and cost his team a match-losing score; Rhodes will pick up something that had four written all over it, essentially acting as a wicket-taking fielder. It might not pan out that way, but I would have loved to see a secondary average rating for Jonty in his prime. You might find his mid-30s batting average translated to the low 50s once you factored in just how many runs he was worth to a team.
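That "secondary average" thought experiment can be sketched in a few lines. Everything here is invented: the fielding runs-saved figures are hypothetical career-shaped numbers, purely to show how a fielding adjustment amortised per dismissal could move a mid-30s average into the 50s.

```python
# Sketch of a fielding-adjusted "secondary average": credit a batsman
# with the runs his fielding saves (or costs) per dismissal, on top of
# his batting average. All inputs are hypothetical.

def secondary_average(batting_avg, fielding_runs_saved, dismissals):
    """Batting average plus fielding value spread across dismissals."""
    return batting_avg + fielding_runs_saved / dismissals

# An elite fielder worth ~15 runs a game, on a mid-30s batting average:
rhodes_like = secondary_average(35.0, 1500, 100)
# A far better batsman whose fielding neither saves nor costs much:
classical_great = secondary_average(50.0, -200, 100)

print(round(rhodes_like, 1))      # 50.0
print(round(classical_great, 1))  # 48.0
```

Under these (made-up) assumptions, the 15-runs-per-innings batting gap closes entirely, which is the shape of the argument being made about Rhodes.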
So it seems like very clouded thinking to use a hybrid approach. What's the point of using analytics if the first filter essentially scrubs out of the equation the very people for whom analytics makes most sense? What you are doing is taking a very rough, non-performance-based criterion, one that analytics itself would show to be useless, then putting faith in the same system you just disregarded to tell you about someone's capacity to improve in situations where you might already have that skill set available.
The net effect is that we then get a squad selected on exactly that basis. Buttler is a "tools" player. Analytics would tell you that while his tools mean he has the capability to smash an attack for a hundred, he will do it far less often than his billing is worth. Yet England will now be looking to this system to find his efficiency, while some county batsman skimmed out of the system is averaging a hundred every time he comes in at 7 before his team have scored 200 runs.
For someone who was touted as a master of data and got the head selector's job on that basis, Buttler's selection is stone-cold proof that he hasn't a clue what he is talking about. It makes no sense on past performance, on any range of performance, or from analytics.
So what are we to conclude? As I said, I conclude that the system of player development, the academy, the player pathway, the directors, the selectors... they have all shown, at every opportunity, that they literally haven't a clue what they are doing. They don't understand the science they hide behind; they don't understand the theoretical models they purport to use. They have consistently failed to produce results, consistently failed to justify their measures, and consistently failed to make any technical sense. All they do is keep rehashing the same contradictory nonsense that makes the average person think "sh*t, they really know what they are talking about", when really the opposite is the case.
Strauss is in a way very lucky that his personal situation makes him safe in his job. He should be marched out the door, along with Flower and all the coaching staff, and let's sack off this joke selector now, because in one squad he has shown us exactly what measure of knowledge and skill he has... basically guesswork. Call it cricketing knowledge, as Smith and Strauss did; I call it guesswork. Kind of amazing, isn't it? For all the talk about development, work ethic to improve, form, whatever...
In the end we pick batsmen who haven't scored a hundred in four years. Says it all.