Future Sun


A virtual queer rock boy band project


Let's get some things sorted out, okay?

FUTURE SUN AS A PROJECT DOES NOT USE ANY STOLEN DATA OR UNETHICAL AI MODELS. THE DATA THAT IS USED IS VERY HEAVILY RESTRICTED TO ENSURE THAT IT IS ONLY PROVIDED BY PEOPLE WHO HAVE SPECIFICALLY BEEN BROUGHT ONTO THE PROJECT, UNDERSTAND EXACTLY HOW IT WORKS, AND HAVE GIVEN EXPLICIT CONSENT TO HAVE THEIR DATA USED. ALSO, IT'S ONLY USED FOR THE VOICES (ALL ART, WRITING, MUSIC COMPOSITION, ETC. ARE STILL MADE BY HUMANS).

Okay, so y'all probably have some questions, so I'll answer some of the ones that I can think of.
Where/how is AI used in Future Sun as a project?
AI is only used to turn the data we provide into a usable voice model. We use a vocal synthesizer called DiffSinger, which you can read about here: https://vocalsynth.fandom.com/wiki/DiffSinger. It's used much like concatenative (non-AI) synthesizers are, but the AI makes the output smoother and adds features, such as an option for automatic pitch tuning.
Do you really need to use AI for that?
No, but I've found the results easier to work with from a linguistics standpoint, especially for English, where vowels are likely to change pronunciation depending on the surrounding sounds (ex. the vowel in "bad" vs. "band"). As someone who's had to work with English concatenative voicebanks since 2015, I can tell you with a ton of confidence that the current concatenative systems really don't work well for English, so DiffSinger ended up being the best option (idk how we survived without it at this point).
What kind of data do you use?
For DiffSinger to do its thing, it requires singing data, so the voice provider sings some songs, and those recordings make up the data set. For cross-lingual synthesis, we also use recordings that were originally intended for concatenative synthesis to get extra pronunciation data. As previously mentioned, everyone brought on board as a voice provider is told directly what data is needed, how it will be used, and what will result from it. From there, they must give their explicit consent before their data is included in the project. We're so strict about this, in fact, that if a voicebank will be used commercially, we only allow public domain songs or songs written in-house for its data. For non-commercial testing, we have covered copyrighted songs, but none of those make it into a finalized model release.
Are the art/writing/songs AI generated?
No. Those are 100% human-made. DiffSinger is not an AI music generator, and it is prohibited to use Future Sun's characters, designs, voices, or names with any other AI stuff. That also means nothing related to cryptocurrency or non-fungible tokens (NFTs) is allowed at all. If you're using them, make your own art, write your own stuff, and don't be an AI/crypto bro (because no one likes them).