Thanks! The way you end up testing your application is, of course, partly a matter of personal or team preference. It's not bad practice to unit test your domain layer extensively, for example. At the same time, you could also argue that if you follow the ports and adapters principle, fully testing your ports would suffice.
Any functionality that ends up being exposed should be fully tested. Testing only through the ports may cost you some coverage of the internals, but that's a trade-off I'd personally be willing to make.
As for the second part: yes, features can get dull to the point where implementing one boils down to writing a command or a query, a handler, and a repository method. Then when you're done with that, you write a handful of tests for the command you just wrote, and you're done. That gets real tedious, real fast.
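For anyone unfamiliar with that shape, a hypothetical "register user" feature might look like this (all names are illustrative, not from any particular framework):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RegisterUser:
    """The command: plain data describing the intent."""
    email: str


class InMemoryUserRepository:
    """The repository method you'd add for this one feature."""
    def __init__(self) -> None:
        self._emails: list[str] = []

    def add(self, email: str) -> None:
        self._emails.append(email)

    def exists(self, email: str) -> bool:
        return email in self._emails


class RegisterUserHandler:
    """The handler: wires the command to the repository."""
    def __init__(self, users: InMemoryUserRepository) -> None:
        self._users = users

    def handle(self, command: RegisterUser) -> None:
        if self._users.exists(command.email):
            raise ValueError("email already registered")
        self._users.add(command.email)


repo = InMemoryUserRepository()
handler = RegisterUserHandler(repo)
handler.handle(RegisterUser(email="a@example.com"))
```

Every feature is the same three files plus a handful of tests, which is exactly why it gets repetitive.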
However, assuming you've done a good job describing and identifying your application, the domain model, and the relevant events (e.g. through event storming), applying an onion-ish architecture can make teams hugely productive: they can implement functionality side by side while staying familiar with the bigger picture of what they're building and where they're going.
Perhaps you wouldn't use an architecture such as this when you're proving a concept; it would be too much red tape. But when you have a good idea of what you're building, either through iteration or through collaboration, I don't think you can go wrong developing your application using this paradigm, or one like it (e.g. clean or hexagonal architecture).
Hope that answers your questions.