A-to-B testing is a common alternate spelling of A/B testing. Also called split testing or bucket testing, and sometimes written without a slash, A/B testing is an approach to gauging user response to multiple variations of an experience - often a feature, a choice of text, or an entire webpage.
The primary needs of someone running an A/B test are (1) a testing hypothesis: a variant of an experience that you expect will move key business metrics, and (2) a platform or the infrastructure necessary to run the test. This includes the ability to serve different versions to different users or user groups, and the analytical capability to measure the difference in how those users respond.
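The infrastructure piece - serving different versions to different user groups - is often implemented by deterministically bucketing each user. Below is a minimal sketch in Python; the function name, experiment name, and variant labels are all hypothetical, and real platforms typically add features like traffic allocation and exposure logging on top of this idea:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    """Deterministically bucket a user into one variant.

    Hashing the user ID together with the experiment name gives a
    stable, roughly uniform assignment without storing per-user state,
    and keeps assignments independent across experiments.
    """
    key = f"{experiment}:{user_id}".encode()
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % len(variants)
    return variants[bucket]

# A given user always lands in the same bucket for a given experiment.
print(assign_variant("user-42", "checkout-copy-test", ["A", "B"]))
```

Because assignment is a pure function of the user and experiment IDs, the same user sees the same variant on every visit, which is essential for measuring their response consistently.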
Setting up an A/B test requires two key pieces: your variants and your metrics. The variants are the versions of the experience that users will see, possibly more than two (often called an A/B/n test), which you'll have to hypothesize and write code to support. The metrics you choose will indicate whether your variants accomplished your business goals; these often include retention, conversion, and engagement.
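For a binary metric like conversion, the analytical step usually boils down to asking whether the difference between the two groups is statistically significant. One common approach is a two-proportion z-test; the sketch below uses only the standard library, and the example counts are made up for illustration:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates.

    conv_a / conv_b are conversion counts; n_a / n_b are the number
    of users exposed to each variant.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: 120/1000 conversions on A, 150/1000 on B.
z, p = two_proportion_z_test(120, 1000, 150, 1000)
print(f"z = {z:.3f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the variant genuinely moved the metric rather than the difference arising by chance; in practice you would also fix your sample size in advance to avoid peeking bias.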