Mehta

I'm hoping to keep this page fairly informal; there is a lot to cover here, and being precise everywhere would just turn this into a ramble. This is me trying to explain how I "derived" the method and what "theory" went into it (though the theory is just simple math).

Consider our typical 2-alg method, CFOP. It has 58 OLL cases and 22 PLL cases, including solved. So, we should be able to solve 58x22=1276 states of the cube using these algorithms. However, due to the possible U moves before OLL, between OLL and PLL, and after PLL, and the 6 possible last layers a CN (colour-neutral) solver can have, we can actually solve 373248 cubestates using these 58+22 cases we know. So, we are exploiting symmetry (AUF is sort of a y-axis symmetry) to do a lot of the work for us. This symmetry buys us an additional factor of 373248/1276 ≈ 292.5 states per case, at the cost of at worst a few U moves. This factor is important: if we want to solve the cube as efficiently as possible per alg we know, we should try to maximise it. It sort of quantifies the amount of symmetry utilised by a method.
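If you want to check the arithmetic, here it is as a quick Python sketch (just a transcription of the counting argument above, nothing more):

```python
from math import factorial

# Last-layer states a colour-neutral solver can face:
# 4 corners and 4 edges permute in 4!*4!/2 ways (permutation parities
# must match), with 3^3 corner and 2^3 edge orientations free (the
# last piece of each type is forced), on any of the 6 faces.
ll_states = (factorial(4) * factorial(4) // 2) * 3**3 * 2**3 * 6
cases = 58 * 22                             # OLL x PLL, incl. solved
print(ll_states, cases, ll_states / cases)  # 373248 1276 ~292.5
```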

Now of course, if we have 3 algorithmic steps, we could potentially exploit more symmetries of the cube. Again consider CFOP, except the last pair is also solved as an algorithm. There are 42 cases for this step. The additional symmetry is that there are 4 possible last slots for any given cross colour. The number of cubestates with LSLL remaining is roughly 224M (~5!x5!x16x81x24/2 to the first order). This gives a symmetry factor of 224M/(42x58x22) ≈ 4200.
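The same sanity check for this count (again just the formula above, spelled out):

```python
from math import factorial

# First-order LSLL count for a colour-neutral solver: 5 corners and
# 5 edges permute in 5!*5!/2 ways, with 2^4 edge and 3^4 corner
# orientations free, times 6 cross colours and 4 possible last slots.
lsll_states = (factorial(5) * factorial(5) // 2) * 2**4 * 3**4 * 6 * 4
print(lsll_states)                   # 223948800, i.e. ~224M
print(lsll_states / (42 * 58 * 22))  # ~4179, i.e. ~4200
```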

Can we do any better without changing the number of algorithmic steps or increasing the number of algorithms much? There are quite a few ways you could think of, but it's no use if the recognition is bad, or the algorithms are bad. For recognition to be good, let's say we limit the number of layers on which anything is going on to 2 (say, everything unsolved confined to the R and U layers). Now you should be able to make an exhaustive list of all the possibilities there are. Consider Mehta-6CP. It has 3 algorithmic steps with 72, 48, and 17 cases respectively (137 total, roughly the same as CFOP's 122). The number of cubestates in which the required pieces remain unsolved is roughly 6!x5!x243x24/2=251M. This gives the amount of symmetry exploited to be 251M/(72x48x17) ≈ 4300. This is not much better.
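And once more for Mehta-6CP, transcribing the same formula:

```python
from math import factorial

# Transcribing the count above: 6!*5!*243*24/2 cubestates in which
# the pieces handled by the three Mehta-6CP steps remain unsolved.
mehta_states = factorial(6) * factorial(5) * 243 * 24 // 2
print(mehta_states)                   # 251942400, i.e. ~251M
print(mehta_states / (72 * 48 * 17))  # ~4288, i.e. ~4300
```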

But then you bring in the D-layer offset. Since the previous steps do not require the E slice to be in any particular state when solved (the 3 algs do not affect the E slice), we can leave the D layer offset however we want and simply mend it at the end with a single ADF. This does not affect the recognition of the preceding steps at all. Basically, the D layer need not be solved w.r.t. the E slice, increasing the number of cubestates by a factor of 4. This gives us a symmetry factor of over 17k! But what about the D offset in CFOP? It in fact already is a thing: it is called pseudo-F2L. And so far at least, the recognition of pF2L seems pretty hard, so including each D-layer offset with equal likelihood (perfectly unbiased pF2L) does not seem very practical.
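For completeness, the updated factor (reusing the Mehta count from the previous sketch):

```python
# A free D-layer offset multiplies the reachable cubestates by 4
# while the alg count stays the same, so the factor scales directly.
print(round(251942400 / (72 * 48 * 17) * 4))  # 17153, the "over 17k"
```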

Of course, none of this matters if the algorithms of the method that exploits symmetry better are much worse in recognition and execution. This is what happened with ZZ-CT. ZZ-CT had 105 and 72 cases (TSLE and TTLL), instead of the 20x4+1 and 495 cases of F2L+ZBLL. Since there was only one possible last slot, the number of states for a single-line solver was 5!x5!x81/2=583200 cubestates, which gives a symmetry factor of ~77. For a single-line solver using normal ZZ-a, there are 2.33M cubestates (the F2L algs cover any of the 4 last slots, hence 4x the states), which gives a symmetry factor of ~58. These are similar numbers, but TSLE algorithms were longer and harder to recognise than F2L algs on average, and TTLL algorithms are arguably worse than ZBLL algorithms. On top of that, some may argue that not having complete freedom in pair selection during F2L hinders lookahead. All these problems offset the slight increase in symmetry exploitation and the halving of the alg count, so we cannot really say for sure that ZZ-CT is better than normal ZZ-a (in fact, to my knowledge, most argue it is worse).
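The two factors, computed from the numbers above:

```python
from math import factorial

# Single-slot (ZZ-CT) LSLL count with EO solved: 5!*5!/2 permutations
# times 3^4 free corner orientations.
zzct_states = factorial(5) * factorial(5) * 3**4 // 2
print(zzct_states, zzct_states / (105 * 72))   # 583200 ~77 (TSLE, TTLL)

# ZZ-a's F2L algs cover any of the 4 last slots, so 4x the states.
print(zzct_states * 4 / ((20 * 4 + 1) * 495))  # ~58 (F2L + ZBLL)
```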

On the other hand, Mehta-6CP algorithms are of similar execution speed to CFOP's (9+10+8=27 moves for Mehta instead of 8+10+12=30 moves for CFOP; Mehta L5EP has M moves while F2L may have rotations; both have reasonably fingertrickable movesets; etc.) and recognition seems just as good. Also, the symmetry factor is not a few percent better, but over 4 times that of CFOP. Basically, more cubestates are solved using the Mehta algset than the CFOP algset despite similar execution, similar recognition, and a similar number of algorithms.

Why is the ability to solve more cubestates good? Because it shrinks the intuitive portion of the solve: here, it replaces cross+3 (typically 25-30 moves, usually with additional rotations) with pseudo-EO-ledge (~20 moves). Not to mention, Mehta-6CP is just a single path of an entire option-select system.

That was basically all of my methodology for coming up with the method and for reasoning that it has potential.

Movecount: 45-50

Algorithms: 130-843

The method was born in early-to-mid August of 2020, and here is the proposal and development thread on the Speedsolving forum and here is the wiki page. Here and here are Reddit posts about the method, and here is a Facebook post on the method on the page Cyoubx's Friends. Here, here and here are the first few YouTube videos on the topic (which are also great links to get started with the beginners' version). Here is the leaderboard of users of this method. Finally, here is the Discord group for the method, where most of the later developments and optimisations took place. As will be evident in these links, the method vastly evolved over the first few months as more and more individuals joined the pursuit of making it a contender for the big-4.

The method wouldn't have achieved in years what it did in just a few months were it not for the efforts of Vincent Trang, Andreas Olvera, Ethan Davis, Matthew Hinton, Liam Highducheck, and countless other users, supporters and critics. From coming up with ingenious improvements, to painstakingly generating, refining and organising algorithms, to running and analysing simulations, it must be recognised that it took immense effort from dozens of individuals across 4 continents to develop and popularise the method to this point.

Developing a new speedsolving method in the cubing community is extremely difficult due to the constant scrutiny and comparison with other methods that have been in development for decades, and the exhausting, everlasting debating and defending of the quality, originality and potential of the idea. This is not necessarily a bad thing, but even decent methods get thrashed, and most developers lose motivation to continue with the idea. While Mehta might be special, it cannot be denied that one of the major reasons Mehta could endure this pressure was the constant effort put into popularising the method alongside the development, which led to more developers jumping in and quickly ironing out many kinks that the original proposal had. I wanted to dedicate this entire paragraph to acknowledging that method popularisation is an important part of method development, and while I am not particularly fond of this, I do not have better alternatives for how the community could function.

That said, Mehta has stood strong through this phase, which barely any speedsolving method passes in the current era of cubing, and that can only be taken as a positive. Now, while in theory it seems to have the potential to beat most of the currently used methods, what remains to be seen is how it does in practice. The only way to find out is to put the method out into the world and let the entire speedcubing community test it for themselves. If you wish to chip in and make tutorials, streams, vlogs, comparisons, etc. in any capacity, I would be more than happy to provide any assistance I can. Signing off in the hope that we see world-class times with the method some day.

- Yash Mehta