There's a lot of bad information here so let's cover the basics.
1. You do NOT need to enable promiscuous mode and forged transmits for ALL vSwitches on the ESXi hypervisor to allow the MM to work. If promiscuous mode and forged transmits bother you, create a dedicated port group with promiscuous mode and forged transmits enabled while the vSwitch that contains that port group keeps those settings disabled, put the VMM into that port group, and things work fine. This is a common requirement for *any* virtualized switch or router; it is not unique, nasty, or dangerous, and doesn't carry any more risk than anything else you deploy when done properly. A sketch of that port-group setup is below.
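For illustration, this is roughly what the port-group approach looks like from the ESXi shell with esxcli (the vSwitch and port-group names are placeholders, and the same settings are available in the vSphere UI):

    # Create a dedicated port group for the VMM on an existing standard vSwitch
    esxcli network vswitch standard portgroup add --portgroup-name=MM-PG --vswitch-name=vSwitch1
    # Enable promiscuous mode and forged transmits ONLY on that port group
    esxcli network vswitch standard portgroup policy security set --portgroup-name=MM-PG --allow-promiscuous=true --allow-forged-transmits=true
    # The parent vSwitch itself keeps both settings disabled
    esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 --allow-promiscuous=false --allow-forged-transmits=false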
Note as well that these requirements (promiscuous mode and forged transmits) will likely *never* go away; they will always apply whenever you virtualize a switching product like our controllers.
2. You do not need to trunk all VLANs up to the MM; the MM only needs to be routable to and from the controllers it is managing. So the MM in effect needs one VLAN/network at minimum. You CAN trunk multiple VLANs up to the MM depending on your use case, etc., but otherwise it only needs one IP and one VLAN (or two IPs if you are doing VRRP). A quick example of what that reachability looks like follows.
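As a rough sketch, 'routable' just means each managed controller can reach the MM's IP (or the VRRP VIP). On an ArubaOS 8 managed device that pointing is done with something like the following, where the IP and key are placeholders (check the command reference for your release):

    (MD) [mynode] (config) # masterip 10.1.10.5 ipsec MySharedSecret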
3. While the HMM and VMM offer equivalent capabilities, the VMM comes with significant cost savings. To support 1,000 APs with redundant MMs, you only need to purchase one MM-VA-1K license, and you can deploy up to four VMMs (2x L2 pairs with L3 between DCs). For redundancy with the Hardware MM, you have to purchase two MM-HW-1K appliances to support 1,000 APs, and if you want four HMMs for the 2x L2 with L3 between DCs design, you are paying 4x as much. So the VMM *is* the better way to go.
4. LACP is never required for any Aruba product, but some people like to use it. For the VMM, LACP comes into play when your ESXi hypervisor's vSwitch is uplinked to multiple physical NICs. VMware tries to make this easy with NIC teaming; however, a virtualized switch with a stateful firewall won't simply accept frames arriving on whichever uplink the teaming policy picks, so you configure that vSwitch's NICs as an LACP bundle between the hypervisor and the upstream switch (instead of NIC teaming with no config on the upstream switch). This is also common when carrying virtualized switching products across a vSwitch with multiple physical uplinks from the hypervisor, and is more efficient. A sketch of the upstream switch side follows.
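To make the upstream half concrete, here is a minimal LACP bundle on an ArubaOS-CX switch as an example (the LAG ID and interface numbers are placeholders; note that on the VMware side an LACP LAG requires a Distributed Switch rather than a standard vSwitch):

    interface lag 1
        no shutdown
        no routing
        vlan trunk native 1
        vlan trunk allowed all
        lacp mode active
    interface 1/1/1
        no shutdown
        lag 1
    interface 1/1/2
        no shutdown
        lag 1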
5. In no way should the use of L2 VRRP create any kind of 'storm' on your network. We have L2 VRRP redundancy deployed on networks with 10k+ APs carrying over 100k concurrent client connections without issue. L2 VRRP is the primary mechanism for controller clustering as well. This is not a risk unless it's done incorrectly (which could certainly cause issues, e.g. if you use the same VRRP ID and passphrase for multiple devices instead of a unique VRRP ID and passphrase for each pair of devices). An example of a clean per-pair config follows.
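For illustration, a minimal ArubaOS VRRP instance on one MM of a pair might look like this (the ID, VLAN, addresses, and passphrase are placeholders; the peer gets the same ID and VIP with a different priority, and every other VRRP pair on the same wire gets its own unique ID and passphrase):

    (MM) [mynode] (config) # vrrp 40
        vlan 10
        ip address 10.1.10.5
        priority 110
        preempt
        authentication MySharedSecret
        description MM-Pair-1
        no shutdown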
6. The VMM deploys with three virtual interfaces/network adapters. 'Network Adapter 1' maps to the OOB management interface, 'Network Adapter 2' maps to gig0/0/0, and 'Network Adapter 3' maps to gig0/0/1. You can certainly deploy on a single interface (I have multiple MMs in my lab on the same VMware hypervisor and the same vSwitches, and all of them use only 'Network Adapter 2' mapped to gig0/0/0). You just disable the network adapters you don't want to use, don't assign them to any vSwitch (or put them in a null vSwitch if it makes you feel better), and leave one of the gig0/0/0-1 interfaces in whatever vSwitch/port group you want. See the sketch below.
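If you script your VMware environment, disconnecting the unused adapters can be done with PowerCLI along these lines (the VM name is a placeholder, and the same thing is a couple of clicks in the vSphere UI):

    # Disconnect adapters 1 and 3 on the VMM, leaving 'Network adapter 2' in use
    # -Connected applies to a running VM; -StartConnected covers the next power-on
    Get-VM -Name "MM-VA-1" |
        Get-NetworkAdapter -Name "Network adapter 1","Network adapter 3" |
        Set-NetworkAdapter -Connected:$false -StartConnected:$false -Confirm:$false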
7. Most entities that choose the Hardware MM do so because a) their security policy requires a physical appliance (think government, military, etc.), b) the network team doesn't work well with or trust the virtual infrastructure team, or c) they just prefer hardware appliances regardless of cost.
If you are a partner, you should have access to whatever training docs you need. As a customer, I agree we could do better with some kind of 'intro/hand-holding how-to-deploy' cookbook, and that's being worked on. But this is a good place to start, and there are plenty of people here who can help. I will add, however, that the vast majority of your issues and concerns really should not be issues. Promiscuous mode/forged transmits are a requirement; if that is a deal breaker and you just cannot do it, even when enabled only in the port group, then hardware is your ONLY option. If you cannot figure out how to run VRRP, that will be an issue regardless of hardware or virtual, since we use L2 VRRP not only for MM redundancy but also for cluster redundancy among controllers. So long as your VRRP IDs and passphrases are unique, there should never be any risk of a 'storm' or interruption of service (unless you have some oddity on your network where something is intercepting or blocking VRRP advertisements).