No. Instead of increasing quality, it might have the opposite effect.
First, it's important to note that Musing.io uses a manual curation system, not an algorithm-based voting system like Steemhunt.com. This matters because, while Steemhunt needs the user score to help judge the quality of a particular submission, Musing.io's manual curators don't. They can judge the quality of an answer or question simply by reading it, better than any algorithm could. So, what purpose would the user score serve?
Second, when it comes to a Q&A platform that covers a variety of topics like Musing.io, a person's past performance on one topic will not necessarily predict the quality of his future contributions on a different topic.
For example, let's say a person answered a question about politics. However, he is not an expert in politics, so he gave a mediocre answer. Let's assume he did this ten times. Now, if we award that person a user score, then it's only logical to give him a low one. After all, he repeatedly gave mediocre answers. However, what if that person also happened to be an expert quantum physicist? And a week later, he came across a quantum physics question that he decided to answer. Should his answer to the quantum physics question be penalized by his low user score? And should the answers of other users, those with high user scores but no quantum physics expertise, be given more weight? Of course not. But with a user score system in place, it's likely that the non-experts with high user scores will be given a more favorable review than the expert with a low user score. Now, imagine if another quantum physicist sees this. He'll likely think that Musing.io is a joke.
So, in this case, the user score is unlikely to increase the quality of questions and answers on the platform.
Third, let's also remember that once you start implementing user scores, voting algorithms, and other solutions to replace manual curation, you open the platform up to potential abuse. One of the features that really sets Musing.io apart from the rest of the Steem DApps is its manual curation system. I'm not saying that this system is foolproof, but at least it's more resilient to corruption. It's more difficult to game a system that's manned by humans, after all.
And finally, if Musing.io is to encourage genuine curation, one that's not based on auto-votes, then the system has to make sure that its users judge questions and answers based on their content, and not on some pre-defined number like a user score. After all, we don't want the platform to become one big circle-jerk where curators just look for the accounts with the highest user scores and vote for their answers.
So, I don't think that implementing a user score will benefit Musing.io at all. It will be more prone to abuse and less accurate when compared to manual curators. Plus, there's really no need for one at the moment. So, let's leave it to the humans, for now.
Hope this helps.
Yes I think it would be great to see some form of user score.
People generally like working towards goals and powering up, so a score could help increase usage and improve retention. Sorting questions by user score could also be a useful filter for finding great questions.
Ideas for items that I think could contribute to the score: