They don't need the same feature list.
It seems similar to GarageBand-type software, aiming to entice people with little audio-production experience and give them an interesting-sounding snippet they can play back to friends.
For example, the only actual audio editing they displayed was slicing and re-pitching (you can't even choose the time-stretch algorithm), which is conceptually very simple to understand.
There's no ability to actually edit dynamics or do very accurate frequency adjustments that I can see from the demos, so it's basically useless for anything I would want to do.
What I mean is that in a DAW you have a lot of tools that don't make sense in an AI context.
Like for example, people who use agentic workflows don't need a Visual Studio license.
For advertising jingles it makes some sense. For artistic expression, like... what's the point?
I used to release music that people listened to, but not anymore; now the only joy comes from making it for myself. Am I still an artist?
If the act of creating this AI music provides joy to some, they should do it. I just have a hard time understanding that.
If the MtG card collector is not an artist does that mean they're bad and need to stop?
I think they are doing art.
If the main reason is to keep their items clean, then no matter how much time they spend on the composition or how good it looks, they are not an artist.
That + latency with MIDI devices is why every DAW-in-a-browser is just a toy.
I somehow doubt a full-blown browser connected to more than a couple of VSTs would be less of a resource hog than doing the same in a DAW. On your computer. That you own. In your house. Without like additional 50ms of latency for the data to travel to the server and back.
As horrible as it sounds, a VST is just a .dll file you're running straight from the Internet. On a "positive" note, they're backwards-compatible with like Windows Vista!
What does that mean? It means that your compositions (outside of bouncing them down to audio stems) exist within a highly proprietary SaaS format, and that the moment you stop paying, you've got NOTHING.
99% of major DAWs (Ableton, Logic, FL Studio, Bitwig, Studio One, etc.) are sold with a perpetual license.
Much like those of us hammering away at LLMs who eventually get incredible results through persistence, people are doing the same with these other AI tools, creating in an entirely new way.
I'm sure Suno are working hard on this and these AI tools can only come together as fast as we can figure out the UX for all this stuff, but I'm holding out for when I can guide the music with specific melodies using voice or midi.
For "conventional" musicians, we (or at least I) would love to have that level of control. Often we know exactly what it should sound like, but might not have session musicians or expensive VSTs (or patience) on hand to get exactly the sound we want. Currently we make do with what we have - but this tech could allow many to take their existing productions to the next level.
But when they say it can replace Pop music I can only laugh. It is the most boring early-2000s R&B ever created, and it sounds thin.
Any Aphex Twin model out there?
The visual stuff also helps to make it more powerful and cohesive.
The bad part is that it wanders a lot to get nowhere, and it does not create a climax that bridges into the second part. The same sounds and ambience with a producer behind them, creating an arrangement, would be much more powerful.
Sunscreen: https://youtu.be/VBaWtOHPTZw
Purple Sunset Over Lake 2: https://youtu.be/lD7rSxPncs4
Maybe you don't owe AI audio creations better, but you owe this community better if you're participating in it.
This is basically what all the Suno creations sound like to me, which is to say they definitely have a market, but that market isn't for people who have a more than average interest in music.
Is it snobby of me to look down upon art that is created using these tools as lesser because the human did not make every tiny decision going into a piece? That a person's taste and talent is no longer fully used to produce something, and for some reason that, to me, is what makes the art impressive and meaningful?
Something about art with imperfections still feels exciting, maybe even more so than something that is perfect, but if I see an AI-generated picture with 6 fingers, I just write it all off as slop.
I am happy to allow my generated code to come from "training data", but I see the use of AI in art, writing, and music as exploiting artists' stolen hard work.
As time goes on, I feel even more conflicted about it all.
> That a person's taste and talent is no longer fully used to produce something, and for some reason that, to me, is what makes the art impressive and meaningful?
Human output isn't sacred. Yes, this is snobbery, a useless feeling of superiority.
You either adapt or go hungry just like everybody else and art shouldn't be exempt from the mechanics of supply and demand.
Take, for example, a track by Fontaines D.C., a band from Ireland that writes extensively about the lived social and political experience. Knowing where they are from and the general themes of their work makes their tracks feel authentic, and you can appreciate the worldview they have and the time spent producing the art, even if it does not align with your own tastes.
Trying to create something of the same themes and quality from a prompt of “make me an Irish pop rock track about growing up in the country” suddenly misses any authenticity.
Maybe this is what I am trying to get at, but like I said, I feel some conflict about this, as I personally value these tools for productivity.
Yes. But aesthetic taste and snobbery usually go hand in hand.
This is the first time I'm actually paying for generated AI content because the value I get is immense. I really think we are headed towards an oversupply of content, where there will be more stuff to read, watch, and listen to, with very real value in all of it.
This spells out the inevitable change in the labor market for content creators. There will always be value in human-created content, and some will make more money, but it will always have AI-generated content competing with it, to the point where it will be hard to stay ahead and eventually people will stop caring.
Case in point: I see some comments being snarky towards Suno, but as a consumer I couldn't care less whether you put your soul and years into producing art versus the stuff I can get plenty of today, especially when there is virtually no difference in quality.
Truly an amazing accomplishment from the Suno team, and probably the first time I've subscribed to a music service after decades of downloading mp3s and hunting down new songs to listen to on YouTube. Suno "steamified" this process, and while I will still use YouTube to discover new genres, I now spend most of my time in Suno, listening to an endless amount of the exact sound I am looking for.
I just haven't heard anything that isn't "slopful" yet. If I do, I will still feel weird about it, but I'm a big believer in the value of "aesthetic objects in themselves", so I am eager to find something I do actually like.
Even just knowing something was drawn or composed by an AI will negatively taint my opinion from the start, but I'm still open.
The problem with AI music is that it just sounds like shit.
I don't totally discount the position that the human "soul" is what makes art art and all that, but I still do think something can be very enjoyable and good without being created by a sentient entity, in theory.
You'll notice many similarities in instrumentation, but how is Suno not like a bad RealAudio take on some of these noises haphazardly lumped together?
Or, same artist, different track: https://www.youtube.com/watch?v=UhvpCHfe0m0
Don't you need more focus and aggression to make even sell-out weak tea dubstep? I feel the generative process really severely fails to deliver anywhere near the correct sound, even for 'bad artificial lol dubstep' sounds.
Another even closer to the intent of the Suno one: https://www.youtube.com/watch?v=G3q_kmpq-9Y
I was very impressed that with v4.5+ I could get quite good songs evocative of Devo, Yeah Yeah Yeahs, Metric, etc.
Version 5 is currently harder (or I haven't figured out a way) to generate this kind of chopped/produced sound. It doesn't follow complex style definitions and tends to generate songs that are too slow and "smoothed" over.
a quantity over quality argument with regard to art is wild.
We're already here with human created content.
as a fellow consumer I care a lot actually
> my favorite genre is new jack swing
My friend, where do you think your favorite genre that AI is now parroting comes from…
Music is a uniquely interesting case, since it has a much lower barrier to entry for consumption.
For example, I had never heard epic power metal about birds, but with Suno I got exactly what I wanted. Sure, the sound quality (I only used v3.5) could be better and the songs could be longer, but I don’t care, I now have epic songs about my Bourke’s parakeet. However, I’m not pretentious enough to think those songs are interesting to anyone other than my wife and me, hence the smallness of the bubble.
Generating ‘content’ tailored to you and not meant for someone else’s taste.
Human artists need to make money and those who create music for a tiny bubble probably can’t make enough.
So as an artist what do you do? Do you have to create music with mass market appeal from the beginning?
Or do you need to bank on luck that your music for ‘small bubbles’ gets discovered?
Or you have to have clever marketing strategies to get your music in front of more ears to hopefully gain more fans. And create merch, tour etc.
I wonder how all this AI music is going to impact indie artists. Spotify and the like are just ripping them off, and on top of that their music is / has been stolen by these AI data gobblers.
I don’t see how at this stage it can replace human expression though (singing, playing violin, piano, etc) which is very nuanced.
Same with acting… nuanced expressions that matter. I’m not sure AI can replicate the acting skills of Denise Gough (Dedra from Andor) for example… and many others.
But it would be awesome to generate more story lines or episodes from your favourite TV shows, for example shows from over 20 years ago.
Imagine being able to create more episodes of Star Trek TNG or DS9, maintaining the feel of that era, without letting someone like Kurtzman ruin it and tell you how new Star Trek should be.
But how do you ensure actors, writers and other creatives from that show will be compensated directly?
Or maybe this will only be possible in a Star Trek like world, where profit uber alles is not the focus anymore.
It either needs to be: 1. So easy anyone can press a button and magically get exactly what they want with perfect accuracy and quality. 2. So robust and powerful it enables new kinds of music production and super-charges human producers.
This is neither. And I don't buy Suno's argument that they're solving a real problem here. Creative people don't hate the process of creating art-- it's the process itself and the personal expression that make it worthwhile. And listeners/consumers can tell the difference between art created with intent and soul, and a pale imitation of that.
But then you look at image gen. The established player, namely Adobe, is surprisingly not winning the AI race.
Then you look at code gen. The established IDEs are doing even worse.
I don't rule out the possibility of music being truly special, but the idea that "established tools can just easily integrate AI" isn't universally true.
I'd argue music generation is different from image or code generation. It's closer to being purely art. Take image generation for example. Most of the disruption is coming from competition with graphic design, marketing, creative/production processes, etc. The art world isn't up in arms about AI "art" competing with human art.
What Adobe and others ought to be doing is setting up internal labs that have free rein to explore whatever ideas they want, with no barriers or formality. I doubt any of them will do that.
Don't forget the secret third option - facilitate a tidal wave of empty-calorie content which saturates every avenue for discovery and "wins" purely by drowning everything else out through sheer volume. We're at the point where some genAI companies are all but admitting that's their goal.
https://www.hollywoodreporter.com/business/digital/ai-podcas...
That way I get new musical ideas from Suno but without any trace of Suno in the final output. Suno's output, even with the v5 model, is never quite what I want anyway so this way makes most sense to me. Also it means there's no Suno audio watermarking in the final product.
It shouldn't be a magic button that does everything for you, removing the human element. A human consciously making decisions with intent, informed by life experience, to share a particular perspective, is what makes art art.
Most success as a musician stems from developing a unique style, having a unique timbre, and/or writing creative lyrics. Whether a coder, designer, artist, or musician, the best creatives start by practicing the patterns of those who came before. But most will never stand out and just follow existing patterns.
AI is nothing more than mixing together existing patterns, so it's not necessarily bad. Some people just want to play around and get a result. Others will want to learn from it to find their own thing. Either way works.
I mean, I hate when it's difficult to get the medium to express my vision... not that AI especially would help with that when I'm actually attached to that vision in detail....
Yep. I was a professional music producer before the pandemic, and I couldn't agree more.
Honestly, I'm glad we are destroying every way possible to earn money with music, so we find another profession for that purpose and then we can make music for fun and love again.
Respectfully I disagree. We have had curated, manufactured pop, built by committee and sung by pretty mouthpieces with no emotional connection, for a long time now, and they make big money.
And look at the vocaloid stuff too.
Those who care, care. Everyone else?
What about the vocaloid stuff?
It’s a counterpoint to the above argument that listeners will be dismissive of AI-produced music because it is a pale imitation of art created with intent and soul. On the contrary, such music thrives and is very popular already.
A particular piece of art isn't "soulless" just because it didn't move you. There were still plenty of humans involved in making it, who made specific artistic decisions. In pop music, the creative decisions are often driven by a desire to be as broadly appealing as possible. That's not a good or bad thing unless you judge it as such. It's still art.
That’s hilarious.
I’m not saying it’s not ‘art’ whatever that might mean, I am saying this idea that people won’t accept and enjoy an AI version is a fantasy.
Strong disagree there. I think that's true of a very small % of consumers nowadays. I mean, total honesty, I think that Suno is not worse than a large fraction of the commercial pop made by humans (maybe) that tops the charts regularly. It's already extremely formula based artificial music made by professional hit makers from Sweden or Korea.
The objective was never to grab discerning listeners but the mass of people. It would work even if they grab 50% but honestly I think it's going to be higher.
Um, have you seen the pop charts at any time in the past... well, since forever, actually?
The majority of commercially produced music today is created with intent to take your money and nothing else, with performers little more than actors lip-syncing to the same tired beat. Because it sells.
They can be the (albeit web-based) "DaVinci Resolve" of DAWs, regardless of whether the AI features are bundled away in the paid plans.
For voice removal I use Ultimate Vocal Track Remover; it's on GitHub.
If you want to test it, here's the link: https://www.submithub.com/ai-song-checker
Looks like the "covers" need some better instrument isolation, but this is really huge for the music industry.
Yup totally won't mess with their algorithms.
LOL oh hell no! Why would anyone use this if a perpetual subscription is required to maintain the rights? Absurd.
> If you made your songs while subscribed to a Pro or Premier plan, those songs are covered by a commercial use license.
More info here: https://help.suno.com/en/articles/2410177
Suno can create catchy songs and succeed in matching genre expectations / cliches.
I've been in phases where I had output I generated with it playing in my head constantly (due to repeated listening).
The output was catchy.
Then tried to generate interesting music, failed spectacularly.
And I, among other stuff, enjoy a lot of music that people consider formulaic, abstract or straight-up boring.
What's missing in AI "art" is intent and well... creativity.
I think it will have a disrupting influence on commercial pop culture, no question.
I also wouldn't claim to be able to classify correctly whether something is AI output.
But art is something entirely different.
You can upload music and let suno arrange it in different styles. I'm a musician myself and am also interested in "interesting" music. I made experiments with my own music and was positively surprised by the "musicality" and creativity of the generated arrangements (https://rochus-keller.ch/?p=1350).
I'm not going to claim AI audio isn't also awash with popular themes and tropes, or that it's a bastion of creativity. I'm also not going to claim that the deepest, really creative ideas aren't expressed in human written works. There are enough people to make truly exceptional songs and prompt many truly mindless AI generations. And there's also nothing wrong with most songs optimizing for personal preferences that are not that; I'm not trying to 'argue against' popular music.
But I am going to claim, for me, that it just hasn't been practical to saturate my tastes from public media, and that most of the reason I personally listen to AI music is that I want something that says or does something I think is creative, exploratory, or intellectually interesting that I don't know how to get from anywhere else.
It's like, sure you can want things from music that are to your specific taste, but it's like coming into a post about, idk, a folk band and complaining that it's not metal. You're allowed to like your thing, but clearly most music is allowed not to be metal, why is this music specifically bad for not being metal?
And in this case the point I'm making is stronger, in that AI audio actually unlocks a lot of ability to listen to things that are 'interesting and creative' but not widely available because of consumer preference, so it's actually more like showing up to a folk metal fusion band and saying the problem with this band is that it isn't metal.
Somehow it's assumed that artists make music for the audience, but many make it for themselves, because they enjoy the process.
Contrary to other comments in this thread, typing prompts on a keyboard is not the same as picking up a guitar and playing it.
>Intent is not a lofty concept, it's at the heart of what art is.
Weird. That's another phrase I don't see in the post.
>You're allowed to like your thing
Massively generous of you, thanks.
Culture is fluid. Music is about exploring the boundaries of what sounds good, often because of feelings. Related to the society in which the music is "consumed".
AI music is a commodity and generally uninteresting, like artists who only imitate styles.
But just like annoying over-commercialized music that only tries to scratch existing itches and match expectations, it can still work to a degree.
Intent is not a lofty concept, it's at the heart of what art is.
The way you describe music, sure, there will be an AI that is able to provide you with a continuous stream of auditory stimuli, like the Penfield Mood Organ from "Do Androids Dream of Electric Sheep?".
That's just not what makes art or music interesting to me, and why I also don't listen to auto-curated "mood" playlists on Spotify.
> Penfield mood organ - Humans use the mood organ to dial specific emotions so they can experience emotions without actually possessing them. In the beginning of the novel, Rick implores his wife to use her dialing console to prevent a fight. He wants her to thoughtlessly dial emotions like "the desire to watch TV" or "awareness of the manifold possibilities open to [her] in the future" (Dick, 6). When emotions can be easily avoided with the mood organ, humans no longer require personal relationships to overcome feelings of isolation or loneliness.
The way you describe AI ('continuous stream of auditory stimuli') is the way I'd describe Spotify. Sure, you could use AI to make a faux Spotify, but, like, why would you? The popular stuff already has saturating supply, and it will sound much better than an AI generation.
Regarding this:
> I'm saying I use AI generations for exactly the opposite — so that I can explore and listen to things that are more intellectually interesting in the ways I find intellectually interesting
I just have not found any AI music that would satisfy this description. But I am very interested in failure modes of GenAI. Especially in Suno, it was cracking me up at times.
I'm also sure there will be a space for interesting and/or challenging music generated with neural networks involved.
But I don't see any revolution here so far.
Care to share examples of AI-assisted music you find interesting? To elaborate, I don't find jarring or curious combinations of cliches interesting.
AI could not invent a new style, it seems to me. To repeat this point.
And I've never had any problem finding interesting music.
Key to me is diving into labels, artists and their philosophy, after I got interested into particular ones (the other way around doesn't work for me).
I adore discogs.com for that. Regarding interviews and stuff, there's sadly a huge decline in quality written material about music, I feel.
"Lowest-common-denominator music" is exactly what Suno produces, at least in my ears.
I could go on and list music I like, but generally avoid that.
Wait, I'll do it anyway for a bit... at the moment, I like
Punctum - Remote Sensing EP
(Caterina Barbieri)
and
AtomTM vs Pete Namlook - Jet chamber LP
just for example
I also love so much other music.
To me, such music is miles apart from the slop I heard from AI.
I heard there's research into generating music in the style of JS Bach as well. How's that going?
I'd guess: probably not too well, because the genius of Bach is not only in complexity or counterpoint rules.
His music is very emotional to me (at least the portions I like).
And, like any good music, it has moments of surprise. It's not just a formula, or a "vibe", or a "genre".
Could AI create a new Techno, a new Blues, a new Bossa Nova?
I doubt it.
I will also repeat that I'm well aware that the best stuff is definitely all human. It's not my genre either, but traditional composers like Bach certainly made extremely interesting, clever, even deeply-studiable pieces and AI 'in the style of' those composers surely won't capture much of that. There's a lot of stuff AI can't do wholesale; one particularly strong example is if you're Jacob Collier, AI is not going to make the complex harmonizations and song structures there.
AI is pretty bad at these textural or instrument exploration things like from Collier above or Mike Dawes or Yosi Horikawa or Yoko Kanno or Keiichi Okabe. There's a bunch of music I listen to because it's generically a genre or mood I like and it's well produced, which I won't list here, and AI audio can often do stuff like that at baseline but not especially well. There's also nostalgia; I'm also certain a huge part of the reason I like the Celeste soundtrack so much is in part that I liked the game so much.
But then there's a whole category of music I listen to where the texture is supplemental to the part that defines it, like most of Acapella Science or Bug Hunter or Tom Lehrer. Eg. Prisencolinensinainciusol isn't interesting to me because it's musically complex; the part I care about is that it's a listenable execution of an idea, not precisely how it was executed on. I don't keep coming back to I Will Derive by some random schoolkids recorded on a potato 17 years ago annually because it's sung well or they were particularly clever with how they took another song and changed the words; I come back to it because it's fun and reflects for me onto a part of my past that I remember fondly, and these things make me happy.
All these words and I've still only addressed half the comment. Ok, let's consider the idea that it's not enough for AI audio to facilitate the creation of interesting musical pieces, and it instead has to create whole interesting musical styles. I take issue with this in a bunch of places. I don't reject artists who I judge not likely able to create a new Bossa Nova. I judge artists based on whether the output they produce is something I want. I do the same for AI.
I also think the question about whether AI could 'create' a new style is somewhat misplaced. A style is a cultural centroid, not just a piece of audio. AI can definitely create new musical textures or motifs, but it's always being pulled towards the form of what it's being asked to produce. As long as we're talking about systems that work like today's systems, the question still needs to involve the people that are selecting for the outputs they want. Could that connected system create something as distinct and audibly novel as a new genre? Yeah, probably, given time and a chance for things to settle. That's a different question from whether it'll do so to a nonspecific prompt thrown at it.
Well, I kind of borrowed this wording from the description of your genAI experiences ("intellectually interesting").
I feel that it's not a bad word to describe qualities of music, although it's a bit nondescript. Sure, music can be interesting but still unpleasant, etc.
But "interesting" means (to me) that the music makes you want to listen to it again.
Music doesn't need to sound pleasant. Or angry. Or sad. Or "abstract" (an oxymoron when it comes to describing a sound, still widely used).
Music is communication.
And just like it's a novelty and sometimes useful or entertaining to use ChatGPT, it's a novelty and sometimes interesting to use Suno.
That's pretty much it for me.
The magic of prompt => non-text media is also interesting, sure.
But not interesting anymore to me as art, at least not without being part of a bigger whole.
A good early example of this would be "Headache - The head hurts, but the heart knows the truth".
That being said, there is a spectrum, sure.
I am interested in generative music.
I do have scenarios where I listen to music and want it to blend into the background (e.g. soma FM).
But even then I love the short moment when a song comes up and I want to note it because it's distinct.
I am not interested in being robbed of that.
Also: why? Why, why, why?
Music is not just a recording packaged as a product. It is a thing humans do. And I say that as a person that enjoys mainly electronic music!
There are many talented humans, there is absolutely zero need for AI muzak, other than decreasing the price.
Musicians leveraging generative AI for creative purposes might become a thing and I am fine with that in principle, but the thought is a joke to me, as of now.
Creating audio from an idea is not the same as letting a machine create an interpolation of stolen ideas to match a prompt.
I've actually had a lot of fun using tools like Suno/Udio as a means of sonic exploration to see how some of my older compositions would sound in different mediums.
When I composed this piece of classical music practically a decade ago, it was intended for strings, but at the time I only played piano, so that's where it stayed. By increasing the "Audio Influence Slider", Suno arranged it in a chamber quartet style but stayed nearly 1:1 faithful to the original in terms of melody / structure.
Comparison blog piece
One thing that's interesting about the AI violin cover is that I'm not sure those runs would be physically possible at that speed on a real violin. So that composition can _only_ be played digitally, I believe.
When I used to do larger, more orchestral arrangements, I was constantly getting dinged by the instrumentalists because, while theoretically musically possible, certain runs or passages were very unnatural on the instrument I scored them for.
For a long time I really hoped that some of the more professional notation tools such as Finale would add the ability to analyze passages and determine how realistic/natural they were for the instrument they were set to.
And then comes Suno (and OpenAI's Jukebox before that), and it felt like my brain exploded... like the classic scene in a superhero movie when the power was given to me. Is my music good? No - but I spent years writing and fashioning poetry and all of a sudden can put that to music... hard to explain how awesome that feels. And I love using the tools, and it's getting better, and it's been fundamentally empowering. I know it's easy to say generative art is generative swill... but "learning Suno" is no different than "learning guitar".
You may wish that learning Suno is no different than learning guitar, but I think the effects of AI may be a bit pernicious, and lead to a stagnation that takes a while to truly be felt. Nobody can say one way or the other yet. That said, I'm happy you can make music that you enjoy, and that Suno enables you to do it. Such tools are at their best when they're helping people like you.
Of course, nothing wrong with watching and appreciating a master at work. It’s just when this is sold as the illusion of education passively absorbed through a screen that I think it can be harmful. Or at least a waste of time.
The learning is in the failing; the satisfaction of landing it is in the journey that put you there.
What an insanely disrespectful take.
I'd love to see programmers reactions to having the measure of their work reduced in such a way as more people vibe code past all the technical nonsense.
Your supposed judgment on skill has nothing to do with something's value as an artform
It's a pretty absurd claim to say that learning Suno is no different than learning a musical instrument. My 8 year old nephew was cranking out "songs" in Suno within an hour of being introduced to it. Reminds me of when parents were super impressed that their 3-year old could use an iPad.
Generative tools (visual, auditory, etc.) can serve as powerful tools of augmentation for existing creators. For example, you've put together a song (melody/harmony) and you'd like to have AI fill out a simple percussive section to enrich everything.
However with a translation as vast as "text" -> to -> "music" in terms of medium - you can't really insert much of yourself into a brand new piece outside of the lyrics though I'd wager 99% of Suno users are too lazy to even do that themselves. I suppose you can act as a curator reviewing hundreds of generated pieces but that's a very different thing.
I always get a little confused when I hear non-musicians say that something like Suno is empowering when all they did was type in, "A Contrapuntal hurdy-gurdy fugue with a backing drum track performed by a man who swallowed too many turquoise beads doing the truffle shuffle while a choir gives each other inappropriate friendly tickles".
You imply "it is Prompt -> Song" but in reality it is "Prompt -> Song -> Reflection -> New Prompt -> New Song.." It is a dialogue. And in a dialogue you can get some places where neither of you could go alone.
As software developers we know that multiple people contribute to a project, inside a git repo, and if you take one's work out it does nothing useful by itself. Only when they come together they make sense. What one dev writes builds on what other devs write. It's recursive dependency.
The interaction between human and AI can take a similar path. It's not a push-button vending machine for content. It is like a story writing itself, discovering where it will end up along the way. The credit goes to the process, not any one in isolation.
Almost all naturally-generated music is derivative to one degree or another. And new tools like AI provide new ways to produce music, just like all new instruments have done in the past.
Take drum and bass. Omni Trio made a few tracks in the early 90s. It was interesting at the time, but it wasn't suddenly a genre. It only became so because other artists copied them, then copied other copies, and more and more kept doing it because they all enjoyed doing so.
Suno ain't gonna invent drum and bass, just like drum machines didn't invent house music. But drum machines did expand the kinds of music we could make, which led to house music, drum and bass, and many other new genres. Clever artists will use AI to make something fun and new, which will eventually grow into popular genres of music, because that's how it's always been done.
Japanese oldies became a trend for a while - the people who found and repopularised the music don't get to say they created it and how it's so awesome to have mastered the musical instrument of describing or searching for things. Well, of course they can, but forgive me if I don't buy it.
Maybe when there is actual AGI then the AI will get the creative credit, but that’s not what we have and I still wouldn’t transfer the creative credit to the person who asked the AGI to write a song.
When artists made trance, the creative credit didn't go to Roland for the JP-8000 and 909, even though Roland was directly responsible for the fundamental sounds. Instead, the trance artists were revered. That's good.
> Japanese oldies became a trend for a while - the people who found and repopularised the music don't get to say they created it and how it's so awesome
I'd bet there are modern artists who sampled that music and edited it into very-common rhythm patterns, resulting in a few hit songs (i.e. The Manual by The KLF).
Musicians don't just copy; everyone adds something new. It's like programmers taking some existing algorithm (like sorting) and improving it. The question is, can a Suno user add something new to the drum-and-bass pattern? Or can they just copy? Also, as it uses a text prompt, I cannot imagine how you even edit anything. "Make note number 3 longer by a half"? It must be a pain to edit the melody this way.
Not everyone. I've followed electronic music for decades, and even in a paid-music store like Beatport, most artists reproduce what they've heard and are often just a pale imitation, because they have no idea how to make something better. That's the fundamental struggle of most creatives, regardless of tool or instrument.
I haven't tried Suno, but I imagine it's doing something similar to modern software: start with a pre-made music kit and hit the "Randomize" button for the sequencer & arpeggiator. It just happens to be an "infinite" bundle kit.
As for DJ'ing, I would say it is a pretty limited form of art, and it requires a lot of skill to create something new this way.
Unless of course you mean "original" as in, some kind of wishy washy untargetable goal that's really some appeal to humanity, where any piece of information that disagrees with your hypothesis is discarded because it is unfalsifiable. Original might as well mean "Made by a human and that's it" which isn't useful at all.
Whereas I use it to mean new or different.
Literally nothing AI outputs is new or different.
You got 30 seconds, of which there might have been a hook that was interesting. So you would crop the hook and re-generate to get another 30 seconds before or after that, and so on.
I would liken it more to being the producer stitching together the sessions a band have recorded to produce a song.
If you're too lazy to put effort into learning how to create an art so you can adequately express yourself, why should some technology do all the work for you, and why should anyone want to hear what "you" (ie: the machine) have to say?
This is exactly how we end up with endless slop, which doesn't provide a unique perspective, just a homogenized regurgitation of inputs.
>too lazy
Again, I wholly reject the idea that there's a line between 'tech people' and 'art people'. You can have an interest in both art and tech. You can do both 'traditional art' and AI art. I also reject the idea that AI tools require no skill, that's clearly not the case.
>nature
This can so easily be thrown back at you.
>why should anyone want to hear what "you" (ie: the machine) have to say?
So why are we having this discussion in the first place? Right, hundreds of millions are interested in exploring and creating with AI. You are not fighting against a small contingent who are trying to covet the meaning of "artist" or whatever. No, it's a mass movement of people being creative in a way that you don't like.
• We're having this discussion because people are trying to equate an auto-amalgamation/auto-generation machine with the artistic process, and in doing so, redefining what "art" means.
• Yes, you can "be creative" with AI, but don't fool yourself-- you're not creating art. I don't call myself a chef because I heated up a microwave dinner.
A better analogy would be "I don't call myself a chef when ordering from Uber Eats".
• If throwing paint at a canvas is art (sure, why not?) then so is typing a few words into a 'machine'. Of course many people spend a considerable amount more effort than that. No different than learning Ableton Live or Blender.
• See previous points.
Yeah, and it worked great until industrial agriculture let lots of people eat who had no skill at agriculture. In fact, our entire history as a species is a long history of replacing skill with machines to enable more people to access the skill. If it gives you sad feelings that people without skill can suddenly do more cool things, that's entirely a you problem.
This is a very old argument within artistic communities.
In cinema, authorship has resoundingly been awarded to the director. A lot of film directors go deep in many creative silos, but at its core the process is commissioning a lot of artists to do art at the same time. You don't have to be able to do those things. Famously, some anime directors have just been hired off the street.
In comics things went the other way. Editors have been trying to extract credit for creative work for a long time. A lot of them have significant input in the creative process, but have no contractual basis for demanding credit for that input. It frustrates them. They can also just commission work, or they can have various levels of input in to the creative process, up to and including defining characters entirely.
Really then, in your example, there's clearly a point where you have had enough creative input in the creation to be part of the artistic endeavor. One judge in China ruled in favour of the artist after they proved that they had completed 20-odd revisions of the artwork before watermarking it.
That is of course, assuming we only follow your strict, reductionist argument. Even for AI art, most generators these days take more than text input. You can mask areas, provide hand drawn precursor art and a lot of other things. And that also assumes no post processing.
Not all AI-generated items will be art. But what I find offensive is the judgement that, as a class, nothing touched by AI could be considered art. Mostly because I lived through "Digital Art is not Art" and "Computer Games are not Art"; proponents of both got overtaken by history and were rightly shamed.
If I ask a comics guy their favorite comic artist they aren't giving me back editors names. They will have favorite editors, or even editor artist pairs, but the artist remains distinct from that.
I simply posited that commissioning a piece of work does not make you an artist. Having art generated for you to your taste is not 'making art'. Hiring an interior decorator to decorate my house does not mean I decorated. Ordering off a menu and requesting extra cheese does not make you part chef.
A better blurring for your argument would be the use of session musicians. If I say I love The Beach Boys, how much of what I love is the session musicians' work versus Brian Wilson's? Is he the artist that I enjoy? But that gets back to it, doesn't it. We as humans want to connect art with its creator. Why? Because art is some reflection of something. Art is 'life is a shared experience'. AI 'art' is not part of that shared experience. I want to connect with Brian Wilson. But I don't connect with some music critic who writes about Brian Wilson's music, even though we both connected with the same artistic work, and even if I learned about the work through the critic, making my relationship to them just as important (I wouldn't know it without them). There being an artist in the middle improves/transforms it/means something (what it means is what is up for discussion).
A pretty crystal is just as pretty as a piece of art, but it is not a piece of art. AI art might be more like the crystal. It might contain beauty/interest/capture attention. But it's not connecting with someone's creation, with intention. I have a local museum and I love exhibits that a specific curator there has focused on more than ones they didn't touch. But that doesn't make them an artist. AI 'artists' fall into that category.
No, but it's the same genetic fallacy. Some digital works aren't art. Therefore all digital art is not art. These people were rightly ridiculed.
Suggesting that because some people put no effort into AI Art, that AI art as a category cannot be art is also a silly genetic fallacy.
>If I ask a comics guy their favorite comic artist they aren't giving me back editors names. They will have favorite editors, or even editor artist pairs, but the artist remains distinct from that.
Correct. Because the authorship debate in that space settled in the opposite direction. If Comic Editors succeeded and were treated like film directors, they would have headline billing on comics and they would be a household name. But it went the other way, and instead Editors who try to claim credit for artistic works, even with receipts, get laughed at.
>I simply posited that commissioning a piece of work does not make you an artist.
Right, but the implication there is that is all people using AI generators do.
>Hiring an interior decorator to decorate my house does not mean I decorated.
Right, but if you are giving the interior decorator creative input, like, "No that sucks this should be red" and revising their output hundreds of times, you are actually involved in the decoration process. And if that decorator is just, hanging up exactly what you tell them to, then they might just be a dogsbody and you the interior decorator.
>I have a local museum and I love exhibits that a specific curator there has focused on more than ones they didn't touch. But that doesn't make them an artist. AI 'artists' fall into that category.
Some do. But the vast majority put a lot more effort in than simple curation. I remember seeing people, when Midjourney first became viable, simply generating 12 images with a single prompt and sharing all 12 on Facebook, to pages that wanted nothing to do with them. That's not art. But it's also not the done thing anymore.
Look, sarcasm aside, for you and the many people who agree with you, I would encourage opening your minds a bit. There was a time where even eating food was an intense struggle of intellect, skill, and patience. Now you walk into a building and grab anything you desire in exchange for money.
You can model this as a sort of "manifestation delta." The delta time & effort for acquiring food was once large, now it is small.
This was once true for nearly everything. Many things are now much much easier.
I know it is difficult to cope with, because many held a false belief that the arts were some kind of untouchable holy grail of pure humanness, never to be remotely approached by technology. But here we are, it didn't actually take much to make even that easier. The idea that this was somehow "the thing" that so many pegged their souls to, I would actually call THAT hubris.
Turns out, everyone needs to dig a bit deeper to learn who we really are.
This generative AI stuff is just another phase of a long line of evolution via technology for humanity. It means that more people can get what they want easier. They can go from thought to manifestation faster. This is a good thing.
The artists will still make art, just like blacksmiths still exist, or bow hunters still exist, or all the myriad of "old ways" still exist. They just won't be needed. They will be wanted, but they won't be needed.
The fewer middlemen to creation, the better. And when someone desires a thing created, and they put in the money, compute time, and prompting to do so, then they ARE the creator. Without them, the manifestation would stay in a realm of unrealized dreams. The act itself of shifting idea to reality is the act of creation. It doesn't matter how easy it is or becomes.
Your struggle to create is irrelevant to the energy of creation.
It may be nice for society that ordering food is possible, but it doesn’t make one a chef to have done so.
But if you ordered 100 dishes iterating between designing your order, tasting, refining your order, and so on - maybe you even discover something new that nobody has realized before.
The gen-AI process is a loop, not a prompt->output one step process.
You might not be a creator, but you could make an argument for being an executive producer.
But then, if working with an artist is reduced to talking at a computer, people seem to forget that whatever output they get is equally obtainable to everyone and therefore immediately uninteresting, unless the art is engaging the audience only in what could already be described using language, rather than the medium itself. In other words, you might ask for something different, but that ask is all you are expressing, nothing is expressed through the medium, which is the job of the artist you have replaced. It is simply generated to literally match the words. Want to stand out? Well, looks like you’ll have to find somebody to put in the work…
That being said, you can always construct from parts. Building a set of sounds from suno asks and using them like samples doesn’t seem that different from crate digging, and I’d never say Madlib isn’t an artist.
I will say Michelangelo was particularly controlling and distrusting of assistants, and uniquely did more work than other master artists of the time, but the point remains. The vision has always been the value.
With AI, there is a vision and there is a tool executing it. This has a recursive loop involving articulation, refinement, repetition. It is one person using a tool to get a result. At a minimum, it is characteristically different than your comparison, no?
To add, my original statement was concerning going into a grocery store and buying ingredients. That was once a much more difficult process.
As an aside it reminds me of a food cart I would go to regularly in Portland. Sometimes the chefs would go mushroom foraging and cook a lunch using those fresh mushrooms. It was divine. If we ever reach a time when I can send a robot out to forage for mushrooms and actually survive the meal, I would celebrate that occasion, because it would mean we all made it through some troubling times.
Banging two sticks together is music. Get off your high horse.
Do you have ANY IDEA how hard these things are to play well?
I don't care if haphazard bashing of sticks with intent to make noise counts as 'music'. I do care if this whole line of discussion fundamentally equates any such bashing with, say, Jack Ashford.
I would be surprised if the name meant anything to you, as he's more obscure than he should be: the percussionist and tambourine player for the great days of Motown. Some of you folks don't know why that is special.
If I write a song about my kid and cat it's funny for me and my wife. I don't expect anyone else to hear or like it. It has value to me because I set the topic. It doesn't even need to be perfect musically to be fun for a few minutes.
People are mixing and matching these songs and layering their own vocals, etc., to create novel music. This is barely different from sampling or papier-mâché or making collages.
People made the same reductionist arguments you're making about electronic music in the early days. Or digital art.
I give my idea to the model, the model gives me new ideas, I iterate. After enough rounds I get some place where I would never have gotten on my own, nor would the model have gotten there without me.
I am not the sole creator, neither is the model, credit belongs to the Process.
So if I have a melody in my head, how do I make AI render it using language? Even simpler, if I can beatbox a beat (like "pts-ts-ks-ts"), how do I describe it using language? I don't feel like I can make anything useful by prompting.
I've been recording myself on guitar and using Suno to turn it into professional-quality recordings with a full backing band.
And I'm not trying to sell it; I just like hearing the ideas in my head turned into fully fleshed-out music of higher quality than I could produce with 100x more time to invest into it.
Actually, having an "autotune" AI that turns out-of-key, poor singing into a beautiful melody while keeping the voice's timbre would not be bad.
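Something close to the naive version of that is already doable with off-the-shelf libraries. Here's a minimal sketch, assuming librosa and soundfile are installed (the file names are just placeholders): it estimates the sung pitch, measures how far the take sits on average from the nearest semitones, and applies one global pitch shift. A real "autotune AI" would need per-note correction and formant preservation to genuinely keep the timbre; this is only the crude baseline.

```python
# Crude baseline, not an "autotune AI": one global shift toward the nearest
# semitones. Assumes librosa + soundfile are installed; file names are placeholders.
import numpy as np
import librosa
import soundfile as sf

def snap_to_semitones(in_path: str, out_path: str) -> None:
    y, sr = librosa.load(in_path, sr=None, mono=True)

    # Frame-wise fundamental frequency of the vocal take.
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    midi = librosa.hz_to_midi(f0[voiced & ~np.isnan(f0)])
    if midi.size == 0:
        raise ValueError("no voiced frames detected")

    # How far (in semitones) the take sits, on average, from the nearest
    # equal-tempered notes; correct the whole take by that amount.
    offset = float(np.median(midi - np.round(midi)))
    corrected = librosa.effects.pitch_shift(y, sr=sr, n_steps=-offset)
    sf.write(out_path, corrected, sr)

snap_to_semitones("vocal_take.wav", "vocal_take_corrected.wav")
```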
Yes, the barrier to entry is low, but there is a very high ceiling as well.
These tools will probably be great for making music for commercials. But if you want to make something interesting, unique, or experimental, I don't think these are quite suited for it.
It seems to be a very similar limitation to text-based llms. They are great at synthesizing the most likely response to your input. But never very good at coming up with something unique or unlikely.
Controlling that flow of generation, re-prompting, adjusting, splicing, etc. to create a unique song that expresses your intention is significantly more work and requires significantly more creativity. The more you understand this “instrument”, the more accurate and efficient you become.
What you’re comparatively suggesting is that if a producer were to grab samples off Splice, slice them and dice them to rearrange them and make a unique song, that they didn’t “actually” make music. That seems like it would be a more absurd position than suggesting AI could be viewed as an instrument.
Tools like Suno make people feel like “their own music” is good and they have accomplished something because they elevate the floor of being bad at a tool (like all technological improvements do). They feel like they have been able to express their creativity and are proud, like a kid showing off a doodle. They share it with their friends, who will listen to it exactly one time in most cases and likely tell them it is “really good” and they “really like it” before never listening again.
That type of AI use is akin to a coloring book, but certainly doesn’t make for “good” music. When a kid shows off their badly colored efforts proudly, should we yell at them they aren’t doing “real art”, that their effort was meaningless, and that they should stop acting proud of such crap until they go to art school and do it “properly”?
No it absolutely is not.
Where playing an instrument means balancing the handling of tempo, rhythm and notes while mastering your human limitations, a tool like SuperCollider lets you just define these bits as reactive variables. The focus in SuperCollider is on audio synthesis and algorithmic composition, that's closer to dynamically stringing a guitar in unique rule-based ways - mastering that means bridging music- and signal-processing theories while balancing your processing resources. Random generators in audio synthesis are mostly used to bring in some human depth to it.
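To make that "define the bits as rules instead of performing them" idea concrete, here's a tiny sketch in Python rather than SuperCollider (so take it as an illustration of the workflow, not of sclang itself): tempo, scale, and rhythm live as plain data, and a constrained random walk over the scale produces the notes. You edit the rules, not the playing.

```python
# Illustration of rule-based composition: tempo, scale, and rhythm are data,
# and a constrained random walk over the scale generates the melody.
import random

TEMPO_BPM = 96
SCALE = [60, 62, 63, 65, 67, 68, 70, 72]   # C natural minor, as MIDI note numbers
RHYTHM = [1.0, 0.5, 0.5, 1.0]              # beats per note, cycled

def compose(bars=4, beats_per_bar=4, seed=1):
    rng = random.Random(seed)
    idx, beats_left, step = 0, bars * beats_per_bar, 0
    melody = []
    while beats_left > 0:
        dur = min(RHYTHM[step % len(RHYTHM)], beats_left)
        melody.append((SCALE[idx], dur * 60.0 / TEMPO_BPM))  # (MIDI note, seconds)
        # Rule: wander by at most two scale degrees, staying inside the scale.
        idx = max(0, min(len(SCALE) - 1, idx + rng.choice([-2, -1, 1, 2])))
        beats_left -= dur
        step += 1
    return melody

for note, seconds in compose():
    print(f"MIDI {note:3d} for {seconds:.2f}s")
```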
Ehh... No.
It most definitely is different and you’ve proven it with your own post. Guitar takes a long time to get to a place where you can produce the sounds you hear in your head. Suno gives you instant gratification.
Look, if it gives you pleasure to make Suno music then you should do it, but if you think having an AI steal a melody and add it to your songs is the same as creating something, you're kidding yourself. At best you are a lyricist relying on a robo-composer to do the hard part. You could have achieved the same thing years ago by collaborating with a musician, like Bernie Taupin did with Elton John.
There are drawbacks to being a skilled (trained/practiced) musician. You specialize in one instrument, and tend to have your creativity guided by its strengths/weaknesses.
I think that soon, some very accomplished musicians will learn to leverage tools like Suno, but they aren't in the majority yet. We're still in the "vibe-coding" phase of AI music generation.
We saw this happen with CG. When it started, engineers did most of the creating, and we got less-than-stellar results[0].
Then, CG became its own community and vocation, and true artists started to dominate.
Some of the CG art that I see nowadays, is every bit as impressive as the Great Masters.
We'll see that, when AI music generation comes into its own. It's not there, yet.
Really? Except the minor part in which a great master spent months to years creating one of his works, instead of a literally mindless digital system putting it together (in digital, no pigments here) instantly.
The technology is impressive, sure, but I see nothing artistically impressive about it, nor anything emotionally satisfying in something that so utterly lacks a world and a life behind its creation.
I’m an artist (the old-fashioned kind).
That’s where I come from, so my viewpoint is colored by my own experience and training.
Also, "old-fashioned"? This to imply that someone rendering painterly visuals in seconds with AI is some new kind of artist? If so, then no, what they do isn't art to begin with. That at least requires an act of effortful creation.
I spent some time, making CG art, and found it to be very difficult; but that was also back before some of the new tools were available. Apps like Procreate, with Apple Pencil and iPad Pro, are game-changers. They don't remove the need for a trained artist, though.
But really, some of the very best stuff, comes quickly, from skilled hands. Van Gogh used to spit out paintings at a furious pace (and barely made enough to live on. Their value didn't really show, until long after his death).
Briefly instructing an image model to imitate an Old Master and having it do so in seconds fulfills none of those needs, and at least to me there's nothing impressive about it as soon as I know how it was created (yes, there is a distinction there, even if at first glance at a photo of a real Old Master and an AI-rendered imitation it might be hard to note a difference).
The latter is not art, and the people who churn it out with their LLM of choice are not artists, at least not if that's their only qualification for professing to be such.
When airbrushing became a thing, “real” artists were aghast. They screeched about how it was too “technical,” and removed the “creativity” from the process. Amateurs would be churning out garbage, dogs and cats would be living together, etc.
In fact, airbrushes sucked (I did quite a bit of it, myself), but they ushered in a new way of visualizing creative thinking. Artists like Roger Dean used them to great effect.
So people wanted what airbrushes gave you, but the tool was so limited, that it frustrated, more than enabled. Some real suckass “artists” definitely churned out a bunch of dross.
Airbrushing became a fairly “mercenary” medium; used primarily by commercial artists. That said, commercial artists have always used the same medium as fine artists. This was a medium that actually started as a commercial one.
Airbrushing is really frustrating and difficult. I feel that, given time, the tools could have evolved, but they were never given the chance.
When CG arrived, it basically knocked airbrushes into a cocked hat. It allowed pretty much the same visual effect, and was just as awkward, but not a whole lot more difficult. It also had serious commercial appeal. People could make money, because it allowed easy rendering, copying, and storage. There was no longer an “original,” but that really only bothered fine artists.
This medium was allowed to mature, and developed UI and refined techniques.
The exact same thing happened with electric guitars, digital recording and engineering, synthesizers, and digital photography. Every one of these tools was decried as “the devil’s right hand,” but became fundamental, once true creatives mastered them, and the tools matured.
“AI” (and we all know that it’s not really “intelligence,” but that’s what everyone calls it, so I will, too. No one likes a pedant) is still in the “larval” stage. The people using it, are still pretty ham-handed and noncreative. That’s going to change.
If you look at Roger Dean’s work, it’s pretty “haphazard.” He mixes mediums, sometimes using their antipathy to each other to produce effects (like mixing water and oil). He cuts out photos, and glues them onto airbrushed backgrounds, etc. He is very much a “modern” creative. Kai Krause is another example. Jimi Hendrix made electric guitars into magical instruments. Ray Kurzweil advanced electronic keyboards, but people like Klaus Schulze made them into musical instruments. These are folks that are masters of the new tools.
I guarantee that these types of creatives will learn to master the new tools, and will collaborate with engineers, to advance them. I developed digital imaging software, and worked with many talented photographers and retouchers, to refine tools. I know the process.
Of course, commercial applications will have outsized influence, but that’s always the case. Most of the masters were sponsored by patrons, and didn’t have the luxury to “play.” They needed to keep food on the table. That doesn’t make their work any less wonderful.
We’re just at the start of a new revolution. This will reach into almost every creative discipline. New techniques and new tribal knowledge will need to be developed. New artists will become specialists.
Personally, I’m looking forward to what happens, once true creatives start to master the new medium.
Suno isn't a tool. Tools are characterized by precision and a steep learning curve, and "AI" is nothing of the sort.
The fact that people still think this is how these models work is astonishing.
Even if that were true, sampling is an art form and is behind one of the most popular and successful genres today (hip hop). So is DJ'ing, or is that also not a skill?
The same puritanism that claimed jazz wasn't music, then rap wasn't music, then EDM wasn't music, blah blah
Gatekeepers of what is and isn't art always end up wrong and crotchety on the other side. It's lame and played out.
Sure, make the models credit the original artists, who cares. That doesn't change if it's an art that should be respected or not.
I never said Suno wasn’t “art”. The opposite is true. If you want to put your name on something that took no effort or skill and call it art, more power to you. You could do the same in other areas, and lame, low effort “art” precedes AI by millennia. You are as welcome as anybody to call yourself a creator, however lame that effort may be.
But man the chutzpah of comparing that low effort drivel with people pushing genre boundaries.
Wrong. On guitar it takes extremely long before you can fit the sounds in your head into a scale and recognize them; with Suno, it is impossible.
I would compare Suno to a musician-for-hire. You describe what you want, some time later he sends you the recording, you write clarifications, get a second revision, and so on. Suno is the same musician, except much faster, cheaper, and with poor vocal skills. Everything you can do with Suno today, you could have done before, albeit at a much higher price.
But we might need new vocabulary to differentiate that from the act of learning & using different layers of musical theory + physical ability on an instrument (including tools like supercollider) + your lived experience as a human to produce music.
Maybe some day soon all the songs on the radio and Spotify will be ai generated and hyper personalized and we’ll happily dance to it, but I’ll bet my last dollar that as long as humans exist, they’ll continue grinding away (manually?) at whatever musical instrument of the time.
So, this songeater guy, what a poser eh?
>> I think that soon, some very accomplished musicians will learn to leverage tools like Suno, but they aren't in the majority yet. We're still in the "vibe-coding" phase of AI music generation. We saw this happen with CG. When it started, engineers did most of the creating, and we got less-than-stellar results[0]. Then, CG became its own community and vocation, and true artists started to dominate.
Hey, it's likely not going to be me, but let's be real - any user of this technology who has gone beyond the "type in a prompt and look i got a silly song about poop" stage will probably agree - someone's going to produce some bangers using this tech. It's inevitable, and if you don't think so it's likely you haven't done anything more than "low-effort" work on these platforms. "Low effort" work - which is what a majority of the AI swill out there is - is going to suck, whether it's AI or not.
And while I have the forum, I do want to make another point. I pay more per month for Suno than for Spotify ($25 vs $9). Suno/Udio etc: do whatever you need to do to make sure the artists and catalogues are getting compensated... as a user I would pay even more knowing that was settled.
You downplay the training it takes to actually use your body to output the notes. With the guitar, your fingers have to "learn" as much as your brain, on a scale that no prompt input will ever match. And I say that as a musician who uses mostly sequencers to compose.
All the discourse around this remark is quite fascinating as an observer. Similar things used to be said about electronic music, or just the use of a conventional DAW, when they were new.
To those who have dedicated years to their craft: one must not confuse self-expression with the mechanics of getting there. It is very respectable to dedicate one's life to the analogue way, but if something lets you get there in a different way, allow it.
I wonder how AI-assisted music production like Suno will change the profession of being a musician. I think people want their favorite music artists to be real humans they can relate to. For that reason, I guess real singers won't be out of a job anytime soon. The same may apply to performers of real musical instruments. No one wants to see music played entirely from a computer during a live concert.
However, I predict that it will be very difficult to become even moderately well-known as a musician by just being a Suno Studio creator alone. A lot of good-sounding content will be created this way, and if an artist can't perform live or doesn't have a unique persona or story to attract an audience, it'll be hard to stand out from the endless mass of AI-generated content.
Tomorrowland begs to differ
However I do care that the person who created the music made hundreds of micro decisions during the creation of the piece, such that it is coherent and has personality and structure, towards the goal of satisfying that individual's sense of aesthetics. Unsurprisingly, this is not something you get from current AI-generated music.
But for cases where music is the primary product, I don’t foresee AI-generated music overtaking anything.
They give users (players?) a sense of agency, making it satisfying. But in reality, you're no more composing than a Guitar Hero player is playing the guitar, nor will you learn how to by doing so. No matter how sophisticated the transformations in an LLM, you're ultimately using other people's music in a sophisticated mashup game.
However, in Guitar Hero, the people whose music was being used at least got royalties. :-/
Ge Wang (professor in my field) wrote a great article on why LLMs are so uninteresting from a musical perspective. https://hai.stanford.edu/news/ge-wang-genai-art-is-the-least...
What people miss is that since the creation of Splice, basically all new music that isn’t from an already established artist is paint-by-numbers. You can get any sample, in any key, that artists have given to Splice. You probably hear a lot of the same sounds in most modern music. This breaks that open.
The Suno team has been doing this exactly right and this is just another step in their evolution.
Major congrats to the product team for this, I can’t wait to see the next iteration!
I tried to sample a few songs generated by others, but I can't find their appeal.
I've played around with Suno for a couple of months. It works for some things, but to me - it just doesn't give any sense of...accomplishment? I'd much rather sit down with my instruments, and come up with the stuff myself.
What is more, I get no sense of ownership. It is not me making the music; I'm just feeding it prompts. That's it.
It's like paying some painter 5 bucks, and telling them what to paint. In the end, you'll have your painting, but you didn't paint it.
With that said, these tools have their uses. Generating jingles and muzak is easier than ever.
That didn’t stop Andy Warhol from becoming a famous artist.
With Suno, using your audio uploads to effectively 'filter' your ideas into something usable, compositing in the DAW, and putting a full song together -- that's going to be a much different experience than "make me a song"
AI is a shortcut from an idea to something that resembles your idea in solid form. If your goal is to create something and get a sense of accomplishment from that creative act then AI will never work for you because the artistic process is exactly what the AI shortcut is short-cutting around.
On the other hand, that AI shortcut is circumventing decades of practice, so if you're at the start and you just want the output so you can use it in some way it's awesome.
I wonder how they trained the model to generate clips. Did they hire a lot of musicians to record the samples, or just scrape multiple commercial instrument libraries without permission?
- extracting the melody/instrument from a clip to be able to edit the notes and render it back with the same instrument (a rough sketch of something like this appears after the list)
- extracting and reordering stems in a drum clip
- fixing timing and cleaning noise from a sloppily played guitar melody
- generating high-quality multi-mic instrument samples for free
- AI checking the melody and pointing out which notes are wrong or boring and how it can be fixed. I want to write the notes myself but help would be very useful.
- AI helping to build harmony (pick chords for the melody for example)
This would help a lot, but current models that generate a whole song without any controls are not what I want.
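To make the first wish and the last two more concrete: none of this is anything Suno exposes, but a crude version is already possible with ordinary open-source tooling. The sketch below is a minimal illustration, assuming a monophonic clip at a hypothetical path (guitar_melody.wav) and an assumed key of C major: it extracts a pitch contour with the librosa library's pYIN tracker, rounds it to MIDI notes, flags notes outside the key (a blunt stand-in for "point out which notes are wrong"), and suggests a naive diatonic chord per note.

    # A rough sketch using the open-source librosa library, NOT anything from Suno.
    # The file path and the key of C major are assumptions for illustration only.
    import numpy as np
    import librosa

    AUDIO_PATH = "guitar_melody.wav"      # hypothetical monophonic input clip
    C_MAJOR = {0, 2, 4, 5, 7, 9, 11}      # pitch classes of the assumed key
    DIATONIC = {0: "C", 2: "Dm", 4: "Em", 5: "F", 7: "G", 9: "Am", 11: "Bdim"}

    # Load the clip and estimate the fundamental frequency frame by frame.
    y, sr = librosa.load(AUDIO_PATH, sr=None, mono=True)
    f0, voiced, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C6"),
        sr=sr,
    )

    # Keep only voiced frames and round each pitch to the nearest MIDI note.
    midi = np.round(librosa.hz_to_midi(f0[voiced])).astype(int)

    # Collapse runs of identical frames into a simple note sequence.
    notes = [int(n) for i, n in enumerate(midi) if i == 0 or n != midi[i - 1]]

    # Flag notes outside the assumed key and suggest a naive diatonic chord.
    for n in notes:
        name = librosa.midi_to_note(n)
        in_key = n % 12 in C_MAJOR
        chord = DIATONIC.get(n % 12, "(chromatic - no diatonic triad)")
        print(f"{name}: {'in key' if in_key else 'OUT OF KEY'}, try {chord}")

This obviously stops far short of the wishlist: it handles one voice at a time, "wrong" here only means "outside an assumed scale", and deciding what is boring is exactly the part that still needs a model (or a human) with taste.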
Why do we continue to prop up these companies when there are ethical alternatives? We are rapidly replacing all experts with AI trained on their data, and all the money goes to the AI companies. It should be intuitively obvious this isn’t good.
Suno 6 should solve those issues.