In the end it's just a definition. There is no right or wrong in this. It's just a convention that people need to agree to. And some people don't and some people do.
Personally, I think the benefits of adopting the convention $0^0=1$ far outweigh its drawbacks. It is very inconvenient to write summation formulas that must treat the $0^0$ case separately.
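One standard example: for the binomial theorem
$$(1+x)^n = \sum_{k=0}^{n} \binom{n}{k} x^k$$
to hold at $x=0$, the $k=0$ term must satisfy $\binom{n}{0}\,0^0 = 1$, which forces $0^0=1$; without the convention, every such formula needs a separately stated $x=0$ case.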
There is a joke about scientists applying to referee a baseball match. In one situation it was unclear whether the player was in or out. The physicist started doing various calculations of the player's speed and the resistive medium of the sand, and after a long time he declared the player out. The mathematician, on the other hand, immediately said the player was out. Asked how he could be so sure, he replied that the player was out because he said he was out.
This is the idea behind mathematics. Mathematics needs to be internally consistent, but it is not discovering something empirically true. So whether $0^0=1$ holds is not something that can be empirically verified at all. It just needs to be internally consistent: whatever you choose, be consistent and keep adopting the standard you chose. And of course, when communicating with others, be very clear about which standard you adopt. This applies to all of math: if you wish to work in a natural number system where $1+1=3$, you can do so, provided you are clear about it and consistent. But that system is not going to be very useful. The convention $0^0=1$, however, IS useful and has benefits.
Lastly, I want to cast the issue in a different light. Maybe the question of $0^0=1$ is a type issue. We often identify the $0$ of the naturals with the $0$ of the reals, for good reasons, but purely formally they are not the same. We construct the naturals directly from sets. Then we introduce the integers as pairs of naturals under an equivalence relation. We do the same for the rationals, and then we use Dedekind cuts to construct a version of the reals. Under this practice, the naturals, the integers, the rationals and the reals are all very distinct sets, typically with no element in common. So the $0$ of the naturals is very different from the $0$ of the reals. (This issue can be circumvented by adopting the reals as the standard set and then finding the naturals inside it.)
Where am I going with this? Well, I think the value of $0^0$ might depend on whether the exponent is natural or real. Indeed, if we work solely with natural exponents $n$, then $x^n$ should be $1$ whenever $x=0$ and $n=0$. But if the exponent can take on arbitrary real values, then $0^0$ might be better left undefined. More formally, we have a monoid action $\mathbb{N}\times \mathbb{R}\rightarrow \mathbb{R}: (n,x)\mapsto x^n$, and another monoid action $\mathbb{R}^+\times \mathbb{R}\rightarrow \mathbb{R}: (n,x)\mapsto x^n$. But perhaps for the first monoid action we want to define $0^0 = 1$, while for the second we don't. Then the second monoid action is not an extension of the first. This is a very crazy state of affairs, but it makes sense to me: if we take a Taylor expansion, we have
$$e^x = \sum_{n=0}^{+\infty} \frac{x^n}{n!}$$
and we want $0^0 = 1$ in order for this formula to hold at $x = 0$. But this is fine, since $n$ only takes on natural values, so we can use the action $\mathbb{N}\times \mathbb{R}\rightarrow \mathbb{R}: (n,x)\mapsto x^n$.
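As a quick numerical illustration (a Python sketch; Python itself happens to adopt the convention `0**0 == 1`), a truncated Taylor sum for $e^x$ evaluates correctly at $x=0$ precisely because the $n=0$ term contributes $0^0/0! = 1$:

```python
from math import factorial, isclose, exp

def exp_series(x, terms=30):
    # Partial sum of the Taylor series for e^x.
    # Relies on Python's convention 0**0 == 1: at x = 0 the n = 0
    # term contributes 0**0 / 0! = 1 and every other term vanishes.
    return sum(x**n / factorial(n) for n in range(terms))

print(exp_series(0.0))                   # 1.0, matching e^0 = 1
print(isclose(exp_series(1.0), exp(1)))  # True
```

If `0**0` were an error or `0`, the sum would have to special-case $x=0$ by hand.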
On the other hand, if we investigate the two-variable function $(x,y)\mapsto x^y$, then the exponent takes on more than natural values, so we should leave $0^0$ undefined.
In fact, ALL the benefits of $0^0=1$ occur only when the exponent is a natural number, so I think this kind of typing does make sense.
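To make the typing concrete, here is a small Python sketch (the names `pow_nat` and `pow_real` are mine, purely for illustration): one exponentiation for natural exponents, defined as an iterated product so that the empty product gives $0^0=1$, and one for real exponents defined via $\exp(y\ln x)$, which is simply undefined at $x=0$. The second is deliberately not an extension of the first.

```python
import math

def pow_nat(x: float, n: int) -> float:
    # The N-action: iterated multiplication. The empty product is 1,
    # so pow_nat(0.0, 0) == 1.0 by definition.
    result = 1.0
    for _ in range(n):
        result *= x
    return result

def pow_real(x: float, y: float) -> float:
    # The R-action: defined via exp(y * log(x)), which requires x > 0.
    # In particular pow_real(0.0, 0.0) is deliberately undefined here.
    if x <= 0:
        raise ValueError("pow_real is only defined for positive bases")
    return math.exp(y * math.log(x))

print(pow_nat(0.0, 0))  # 1.0
# pow_real(0.0, 0.0) raises ValueError: not an extension of pow_nat
```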
In any case, all of this is still somewhat unclear to me. It is clear how $\mathbb{N}$ is a category (as a skeleton of the category of finite sets, where $m^n$ counts the functions from an $n$-element set to an $m$-element set) and how the categorical operations force $0^0=1$: there is exactly one function $\emptyset\rightarrow\emptyset$. My goal is to see $\mathbb{R}$ as a category too, and perhaps to obtain that $0^0$ should be undefined in that category. This would be the optimal solution for me, but I'm not there yet.