Adobe Podcast Studio: A First Look
15:07
Comments
@ilblues 2 days ago
Since podcasts are conversational, why wouldn't the goal be to keep them conversational? I have this mental picture ... friends come for a visit and one of them breaks out a megaphone. Soon everyone has a megaphone to keep up with the guy who started it. Just like that, it's no longer a friendly and pleasant conversation, but a blast session. Where compression helps me: like many speakers, I tend to trail off at the end of phrases, and compression just evens things out without crushing the audio. On a similar note, I've been releasing a few of my original songs as BONUS material, but before doing so, they get re-mastered from the original tracks with little to no compression. I'm liking the new mixes better, as I've always regretted crushing them when mastering for CD. As for crushing the audio: the program I used back then is called Voxengo Elephant.
@jesse.mccune a day ago
It's pretty normal for people to sort of trail off as they run out of breath towards the end of longer thoughts. Compression is one method to address this, though it's not my favorite. I feel that upward expansion or some sort of leveler can work better and more transparently in those situations. I use the leveler built into Sonible's smart:comp for this. It's not 100%, but it works most of the time to level out the audio so we don't have to compress as heavily. It's a two-pronged approach which I think helps avoid over-compression and does so with more transparency. When it comes to music, I prefer more dynamic music and have heard a number of "demos" which actually sound better than the mastered releases. By demos, I mean finished mixes that haven't been sent out for mastering yet. They generally sound more open to me and maybe a little more 3D, whatever that means.
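For anyone unfamiliar with the distinction: compression turns the loud parts down, while upward expansion lifts levels below a threshold part-way back up. Here's a rough Python sketch of the static gain curve of an upward expander (function name and numbers are made up for illustration; this is not smart:comp's actual algorithm):

```python
def upward_expand(level_db, threshold_db=-30.0, ratio=2.0):
    # Above the threshold, leave the signal alone.
    if level_db >= threshold_db:
        return 0.0
    # Below it, boost the level part-way back toward the threshold.
    # With ratio=2, a level 10 dB under the threshold gets a 5 dB lift,
    # so quiet trailing phrases come up without squashing the loud parts.
    return (threshold_db - level_db) * (1.0 - 1.0 / ratio)

print(upward_expand(-40.0))  # 5.0 dB of lift
print(upward_expand(-20.0))  # 0.0, untouched
```

In practice a real plugin smooths these gains over time with attack and release settings rather than applying them instantaneously.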
@OneStepToday 2 days ago
How can I use the spectral denoise for live streaming audio?
@jesse.mccune 2 days ago
You won't be able to use Spectral De-noise for live streaming. It's built for post-production and not capable of processing live audio.
@jesse.mccune 5 days ago
I mentioned an example I didn't have time for. Here are the links to the same recording from The Futur featuring Chris Do and Jule Kim. One is the video that is on YouTube, and the other is the podcast. Which one sounds better to you? The Futur Podcast (Ep 282): *start at 3:42*: lnns.co/M-6oNDRl7uC The Futur YouTube: kzbin.info/www/bejne/rmfdaWCQdr2Zfqcsi=kd3EL_uqLFnuOfC4&t=243
@Tomaslav16 5 days ago
Greetings! The dxRevive Pro 1.2.2 update is out! Are you planning to do a review of how the new algorithms perform?
@jesse.mccune 5 days ago
I do plan on doing a video about dxRevive 1.2.2.
@sorlag110 6 days ago
It really can't hold the natural pitch of the person, so it rises and sinks very robotically, like using a pitch shifter. It makes mumbled words sound even worse: the brain can often make out mumbled words, but the enhancement changes them to something unrecognizable. And I hear it used everywhere lately, in serious interviews with important people, and it totally feels like the equivalent of seeing six-fingered people in stock images.
@jesse.mccune 6 days ago
I agree completely. I think we're hearing it more often for a couple of reasons. One, they have a free version. Two, they've rolled the monstrosity into Premiere now. Three, since so many people have Adobe subscriptions, it's become a standard feature amongst their users. DaVinci and Final Cut Pro each have their own version, but both of those focus on noise and reverb reduction while Adobe's does all the weird things to the voice.
@xanthepaige9456 7 days ago
Your videos are so helpful, thank you! As someone pulling my hair out (and destroying my wrist) spending hours every day editing audio, this is all great info. Going to take a look at Hindenburg, because the ability to edit from a text transcript is really what I need.
@jesse.mccune 7 days ago
Thanks for the comment. It's always nice to hear that these videos are helping people out there. There's not much I can do to help save your hair, but I have some advice for the wrist. When I first started editing, I was getting a lot of pain in my wrist. In my case, I was using the Apple Magic Mouse. I quickly realized that it's not a very ergonomic mouse design. I ended up choosing the Logitech MX Master and the days when I feel wrist pain are much, much less frequent. It is programmable and has a horizontal and vertical scroll wheel, which makes it so I can do 85% of my Hindenburg editing from the mouse.
@xanthepaige9456 7 days ago
@jesse.mccune I love this! Had the tab open to buy it after watching your video; it sounds like a game changer. Thanks again!
@jesse.mccune 6 days ago
@xanthepaige9456 I know it has been for me, not only in terms of wrist pain, but also in terms of efficiency. Are you editing for yourself, others, or both? Interviews or solo episodes?
@xanthepaige9456 6 days ago
@jesse.mccune I'm editing for myself. I record and edit at least an hour of finished audiobook content basically every weekday, so the more efficient I can get, the more of my daylight hours I can reclaim!
@jesse.mccune 6 days ago
@xanthepaige9456 I would think the combo of Hindenburg and a programmable mouse should help you reclaim some of those daylight hours. Those two elements provided me the biggest time savings. The rest has come from repetition through practice and editing.
@chowpokin 9 days ago
14:13 Dialogue Isolate: the best on screen. Thanks for your great test.
@jesse.mccune 8 days ago
Thanks, I'm glad you found it helpful.
@mikeserman 11 days ago
I've encountered drifting when aligning Riverside .wav files with their .mp4s. Have you experienced this?
@jesse.mccune 11 days ago
I haven’t run into this recently, but I’ve had a couple instances where this was an issue. Tech support was able to reprocess the files and correct the sync issues. Having said that, I have seen a ton of reports of sync issues with Riverside over the last month or two. It seems some of the newer AI features have caused a lot of issues. Another thing that has been blamed lately is that Riverside quietly raised their minimum tech specs, now requiring much stronger computers than in the past.
@xyzmedia5161 11 days ago
Any thoughts on podcasting reaching a point of oversaturation? There seems to be too many of them at this point. Or will the industry keep rising for years to come?
@jesse.mccune 9 days ago
I look at it like any other form of content. They're all saturated and a lot of what's out there is low effort, low quality content, whether we're talking about blogs, YT videos, podcasts, music, books, whatever. I don't see any reason the industry would just die out. If we look at music and books, they keep going and we get countless new book and music releases each week by pros and indie/hobbyists alike. As the amount of content grows, it becomes a matter of being able to market oneself well to find your way to your audience. The saying that cream rises to the top is at play here. Low quality, low effort content will sit there wasting server space collecting digital dust while those who put out quality content will tend to do better.
@greatlandmedia9108 11 days ago
About the so-called “perpetual license”: the Hindenburg folks have bastardized the term and are not giving a true perpetual license. With a true perpetual license, once you have installed the software and input the license information, there is no limit to future use. You *can* paint yourself into a corner by upgrading the operating system to the point where the software is no longer compatible, but that’s the only limiting factor. NOT SO WITH HINDY 2.0’s so-called perpetual license. With Hindy 2.0 you have to have your “perpetual license” essentially “reauthorized” every 90 days by connecting your computer to the internet and allowing the software to connect to the Hindenburg servers for reauthorization. What happens if you can’t connect? What happens if their servers go down? What happens if they go out of business, or they simply choose to stop authorizing their so-called perpetual license customers? You’re screwed. This is not a true perpetual license; this is a joke and a fraud. It is dishonest and, in my opinion, of questionable legality. Certainly not very ethical.
@jesse.mccune 7 days ago
I'm not a fan that it phones home and has to authorize every so often, but I feel it still falls within the definition of a perpetual license. As long as the company stays in business, the perpetual license should give us access to Hindenburg 2 for as long as it is supported on the OS we are using. The EULA states: "includes all updates provided for the version of a Product that is available at the time of acquiring the licence, up to and including version x.99 of that version. To access a new version, a paid upgrade fee is required." In my eyes, that qualifies as a perpetual license. The licensing terms do mention having to authorize every 60 days under a perpetual or annual subscription plan. While I'm not a fan of the phoning home, I will take it over other forms of authorization like dongles and challenge/response; when it's implemented well, it's invisible to my daily operation. I did have issues with the phoning home for some months with H2, where I had to log in to authorize multiple times a week, sometimes multiple times in a day. It has been smooth for a while now. I have reliable internet and don't tend to work somewhere without it, but I can see how that would be a pain point for people who don't always have internet when they're working. There have been a couple times where I didn't have certain plugins because my internet was down or the iLok cloud servers were down, so I've been there.
@greatlandmedia9108 6 days ago
A perpetual license should not, and cannot, rely on the issuing company remaining in business and continually giving you permission to use something you already have permission to use. That is why it is not technically or functionally a perpetual license, despite their use of the term. A true perpetual license would allow a user to continue to use the software even after the company goes out of business. Perpetually. Hence the term and the definition. Unfortunately, like so many companies these days, the Hindenburg folks have chosen to redefine the English language to mean what they want it to mean, not what it actually means. And unfortunately, too many people are willing to accept this.
@jesse.mccune 6 days ago
@greatlandmedia9108 I can tell you're passionate about this topic. I struggle to think of many pieces of software with perpetual licenses that don't phone home at one time or another, whether it's to check for updates, authorize, or check that you don't have their software running on multiple machines. It's a pain, but I do see the need for them to protect their IP. I am one of the people who choose to accept it because my options would otherwise be limited. I don't dwell on the "what might happen" scenarios. If Hindenburg goes out of business next year and I'm stuck with an unauthorized version that won't open, I'd move on. All the bugs and frustrations of working with Hindenburg today are bigger concerns to me than anything else. But that's me. I know we don't all view things the same way, so I accept your viewpoint as equally valid as mine. I find it interesting to see how we all see things differently.
@ES60Hz 13 days ago
This is not noise removal; it is clearly a text-to-speech AI voice.
@jesse.mccune 13 days ago
I think it's a bit of both. At least that's what it sounds like to me. They apply very aggressive noise reduction and then mix in an AI clone underneath to fill in the rest. Whatever it is, it does not sound good.
@ES60Hz 13 days ago
The weird thing is that with in-ear monitors it’s hard to notice this bad effect, only with speaker monitors. So maybe this is why some people say that it sounds good and other people say that it sounds bad. At least this is my experience with my monitors. Did you notice it too?
@jesse.mccune 4 days ago
That's interesting, because I find it easy to hear with headphones. I think a big part of it is that many people hear that the noise and reverb have been removed and are happy. They aren't listening to how the voice has been changed or the artifacts that are now present. Some may not even be able to hear these things. Many podcasters are playing the role of engineer, producer, writer, editor, mixer, and all the other things without necessarily having the knowledge and experience needed to play all of those roles. All we can do is hope that these tools get better, because there's way too much low quality content being put out by people using these AI tools to compensate for a lack of skill.
@dgrand7917 13 days ago
When I use it I get a blue screen of death.
@jesse.mccune 13 days ago
Have you downloaded the latest version? They recently ended their OBT1 phase and have started with a new version for OBT2. product.supertone.ai/shift
@ilblues 14 days ago
Thoughtful interview, Jesse. I'd never use Hindenburg for what I do - Audacity and Reaper do just fine. Your description of Hindenburg's leadership, customer service responses, and what sounds like vaporware promises reminds me of the Commodore Amiga, which was my first computer. That kind of stuff did them in - small company, tunnel-vision owner and profiteer, always trying to sell new, faster machines to a fixed customer base, with more and more of them peeling off and moving to the PC or Mac for a wider and more stable software selection. Still miss it as a fun machine that did some things very well - but not keeping up with user wish-list stuff killed it.
@jesse.mccune 13 days ago
Thanks, Jack. I feel Hindenburg has done a good job of creating an easy-to-use DAW for dialog editors, and version 1 was solid. I don't think there were vaporware promises, at least not yet. I think they have every intention of delivering those features, but lack the leadership experience to know that there are two options here: • announce the new features that will be included, but make it clear they will not be available on release day and give an estimate of when to expect them • only announce the features that will ship on release day. My friend had a Commodore Amiga. It was the first computer I got to use outside of computer day at school. I don't remember much about it, though, since I was still a kid. "But not keeping up with user wish-list stuff killed it." This is where Hindenburg is right now. There are so many companies building a lot of brand loyalty by listening to the needs and wants of their clients. Hindenburg doesn't function this way and seems to not see how much damage they are inflicting on their brand reputation. Brand is everything these days if a company wants to become successful.
@ilblues 13 days ago
@jesse.mccune Sounds like Hindenburg ought to read up on the difference between a push and a pull system of design. What you're describing is a push system, and users won't stand for that for long.
@jesse.mccune 12 days ago
@ilblues I'm unfamiliar with those methods, but I get the gist. There are a few things they could benefit from reading up on if they want to remain a sustainable company or grow into something more than a niche product for radio producers.
@StephenCarterStressExpert 14 days ago
After having used Hindenburg Pro for several years - having paid for a "perpetual" license - I was more than disappointed and annoyed when they stopped supporting H1 and went to the subscription model. I ultimately purchased the new subscription version at the "reduced" price for those who had licenses for the original version. My annual renewal came due a month or so ago. I thought long and hard before committing to another year. I like the functionality of the application and use it to produce several podcasts. Your description about how they handled the launch is spot on. It was terrible. Some features promised with great fanfare now aren't even mentioned. Their silence is deafening. I hope they get their act together, but it seems like they continue to limp along slowly. I'm likely going to move my production to Reaper as the year rolls on. Such a shame.
@jesse.mccune 14 days ago
It's definitely been a bumpy transition from H1 to H2. Out of curiosity, did you know they offered H2 with a perpetual license, or did you knowingly choose the subscription? I like the ideas that Hindenburg offers, but many times it feels like they miss the mark. They created a tool that could become a powerful program if they could get out of their own way and start doing product and market research. I had a conversation with Nick, who's a nice guy, but I didn't feel like I was being listened to as a frustrated customer. I heard a lot of excuses, and the only real acknowledgment was him agreeing the rollout was terrible. He said they will start creating features that will appeal to more experienced engineers and editors, but we'll have to be patient. I don't know how much patience I have left at this point. There were comments that the bugs and flaws with the program are because of the platform and not actually bugs with Hindenburg. As a user, I don't care where the blame lies; I just know that I'm dealing with constant bugs. Even if it's the platform they have built Hindenburg on, who chose that platform? My general takeaway was that Hindenburg either doesn't have the resources to hire the staff they need or the experience to understand why it's hurting them to keep operating the way they have been. The way they've handled the rollout and customer feedback is the type of mishap that damages brands in deep ways, and they still have done nothing to try to fix that. I tried to get through to Nick that the silence is the thing that has really made things worse for many of us and that it would help if they started communicating more. He said they were going to start doing that. I haven't really seen it. Their latest release, with the Effects Presets, which is likely to be a big thing for most of their serious users, was quietly released. No announcements, no emails. I only knew about it because someone in my community asked about it. I'm willing to give them the benefit of the doubt, but I have my doubts that they will indeed get their act together.
@StephenCarterStressExpert 14 days ago
@jesse.mccune Totally agree. Props to you for reaching out to Nick and other Hindenburg people. I was aware of the new perpetual license option, but I wasn't - and still am not - sure they're going to last long enough to make the new perpetual license fee worth the money. As time permits, I'm playing with Reaper as a potential replacement. It's overkill for podcast editing, but other than a one-time fee of $60, there's no recurring annual or large upfront expense. We'll see how things go over the coming months. I enjoy H2's ease of use and basic features, but unless I see significant change in their communications and development progress over the coming months, my thinking now is it's time to move on next year, probably to Reaper.
@ilblues 14 days ago
@StephenCarterStressExpert With Reaper, I believe that $60 buys you all updates for a single version ... i.e., when I bought 6, I was golden until they came out with 7 and then had to pay again or stick with 6.
@jesse.mccune 12 days ago
@StephenCarterStressExpert It was actually Nick who initiated the conversation, under the premise of giving some context, because I've encountered so many bugs over the last year. That context sounded more like excuses and justifications than listening to my experience and feedback. Nick is a nice guy and passionate about what they do. He does want to make Hindenburg a great tool for audio storytellers. But his definition of audio storytellers is radio shows and narrative-style podcasts. It left me with the feeling that interview podcasts are something they don't really know much about or even care about. Comments like "if you need more than 6 plugins you're overthinking it" and "we can't create a strip silence tool because people will misuse it and ruin things by removing all the ambiance" stood out. I asked some questions about their brand and who their target audience is, and he couldn't answer. They market it as a tool for podcasters, but kind of look down on interview shows. There's something about some of these older radio guys where they look down on interview podcasts as if they are lesser. For me, I can't get fully behind a product when leadership keeps telling me their tool isn't meant for me. They've dug themselves into a giant hole and now have to figure out how to get out of it. I've heard good things about Reaper. I wouldn't say it's any more overkill than any other DAW for editing podcasts. With the rate that Descript, Riverside, and Podcastle are developing, DAWs, even Hindenburg, may well become overkill and something that only us dinosaurs use to edit.
@fire.goddess777 16 days ago
Wow very helpful! Thanks so much for the clarity!
@jesse.mccune 16 days ago
I'm happy you found it helpful.
@orisabwatermd3398 19 days ago
What is your recommendation for someone with no knowledge of audio editing who wants a simple tool?
@jesse.mccune 18 days ago
Are you working with podcasts or something else? How many dialog tracks do you tend to have? What are you looking to achieve with this tool? Noise and reverb reduction? EQ? Compression?
@orisabwatermd3398 19 days ago
Thank you so much for this review. I would have spent forever trying to figure out why this doesn’t work. 😮
@jesse.mccune 18 days ago
I'm glad it helped.
@timoliski5504 22 days ago
WTAutomixer's main feature, the gain-sharing automixer that turns down non-active microphones, wasn't shown in this video. I would definitely not use WTAutomixer for one mic. If there are two or more microphones in the same room, that's when WTAutomixer starts working really well.
@jesse.mccune 22 days ago
Thanks for the comment, and I apologize for not having timestamps on this video. I demonstrate this at the 4-minute mark. I chose this example specifically because the two people are in the same room and it's exhibiting mic bleed. I could not find a setting anywhere that would actually turn down the non-active mic. It appears it is simply using a gate to achieve this instead of completely muting the non-active track. This allows louder bits of mic bleed to open up the gate. While this isn't much of an issue in use, it's no different than using a gate on each track. Perhaps I'm missing something, but going through the manual didn't help me figure out how to make it work.
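WTAutomixer's internals aren't public, so purely as a generic illustration of the difference being described: Dugan-style gain sharing ducks inactive mics in proportion to their level, while a gate is all-or-nothing, which is why loud bleed can pop a gated channel back open. A hypothetical Python sketch (function names and numbers are invented, not any plugin's actual code):

```python
import math

def gain_sharing(levels, floor_db=-40.0):
    # Dugan-style gain sharing: each mic gets gain equal to its share
    # of the total short-term level, so the summed gain stays roughly
    # constant no matter how many mics are live at once.
    total = sum(max(l, 1e-12) for l in levels)
    gains = []
    for l in levels:
        share = max(l, 1e-12) / total  # this mic's fraction of the total
        gains.append(max(10.0 * math.log10(share), floor_db))
    return gains

def hard_gate(levels, threshold=0.05):
    # A simple gate by contrast: each mic is either fully open (0 dB)
    # or heavily attenuated, so a loud burst of bleed re-opens it.
    return [0.0 if l >= threshold else -60.0 for l in levels]

# Mic 1 talking, mic 2 only picking up bleed:
print(gain_sharing([0.30, 0.03]))  # mic 1 near 0 dB, mic 2 ducked ~10 dB
print(hard_gate([0.30, 0.07]))     # bleed above threshold keeps mic 2 open
```

The gain-sharing approach never fully mutes a channel either, but it scales the ducking continuously instead of flipping between open and closed.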
@ilblues 23 days ago
Thanks Jesse - you've got me rethinking my process. Converting a print blog to a podcast takes me about 60-90 minutes per 2-page / 10-minute article. Uploading a raw 2m file to Auphonic and processing another copy my usual way, Auphonic was better at preserving the soft/trailing ends of words - like those that end in m, n, ing, etc. I typically use a noise gate as the 2nd step after editing the recording to script, set for -38 to -42 dB to remove 95% of the breath noise, but that clips some word ends. I haven't found the solution yet. Auphonic didn't do that, to my hearing. Other differences were subtle - Auphonic was a wee bit sharper at 3k, 5k, and 10k+ and lower at 4k versus Accentize Dialogue Enhance processing. I haven't needed to use DeRoom any longer now that I'm recording in the clothes closet. I'm impressed enough with Auphonic to switch. Much as I'd like to boast of my audio editing abilities, at 70, simplicity and saving time for Judge Judy reruns is more important to me. ;^) Again, thanks for the eye opening. Jack in Sequim
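On the gate clipping word ends: hold and release settings, if your gate has them, are usually the fix rather than a lower threshold. A toy block-based sketch of the idea (all parameter values invented for illustration, not modeled on any particular plugin):

```python
def gate_gains(block_db, threshold_db=-40.0, hold_blocks=6, release_db_per_block=3.0):
    # Returns a per-block gain in dB. The hold keeps the gate open briefly
    # after the level drops below threshold, and the release fades down
    # gradually instead of chopping, which preserves soft trailing sounds
    # like word-final m, n, and -ing.
    gains, hold, gain = [], 0, 0.0
    for db in block_db:
        if db >= threshold_db:
            gain, hold = 0.0, hold_blocks                   # open instantly
        elif hold > 0:
            hold -= 1                                       # still holding open
        else:
            gain = max(gain - release_db_per_block, -60.0)  # fade out
        gains.append(gain)
    return gains

# One loud block followed by quiet: stays open through the hold,
# then ramps down 3 dB per block instead of slamming shut.
print(gate_gains([-20.0] + [-50.0] * 9))
```

With a hold and release like this, breaths between phrases still get attenuated, but the decaying tail of a word rides down gently instead of being cut.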
@jesse.mccune 23 days ago
I'm happy the video helped you, Jack. This is a big reason I do videos like this. I think simplicity and saving time should be important at any age. While Judge Judy isn't my thing, I can think of any number of things that I can do with the time I save here and there. Given your project of converting blog posts to podcast episodes, the more you can automate that process the better.
@xyzmedia5161 23 days ago
This is very true. I sometimes have to have conversations about this with people who buy editing services. They are often obsessed with nonsense and details for which the ROI is very low. You could do a million things, but should you?
@jesse.mccune 23 days ago
Too many people make decisions based on what they think people care about or what they like. It's one of the most difficult aspects of offering creative services to others. Obsessing on details that come and go in a fraction of a second makes me scratch my head. Often, the things clients obsess over are the things that don't matter, but they ignore the things that are causing people to stop listening. There's only so much we can do, though.
@xyzmedia5161 24 days ago
For what it's worth, I haven't tested anything other than Audacity, Descript and Auphonic, but out of those, Auphonic seems to be the best by far. Studio Sound is very unreliable and I've noticed it really struggles with processing any sort of laughter
@MarcoRaaphorst 25 days ago
Thanks for this. I found Auphonic best for all sources. Way better than the rest.
@jesse.mccune 25 days ago
I was surprised at how well it did on all the clips.
@MarcoRaaphorst 25 days ago
@jesse.mccune Me too. Even better than the Hush Audio App.
@Tomaslav16 26 days ago
Hello, my friend))) Which algorithm is enabled in Accentize dxRevive Pro?
@jesse.mccune 26 days ago
These were all run using the Natural algorithm. I’m not a fan of the EQ that is applied in the other algorithms. I’m not sure how I forgot to mention that, so thank you for asking.
@ChrisPFuchs 26 days ago
I'm a big fan of Hush and Auphonic. One thing that stood out to me is how well Auphonic attenuated the reverb on the esses in example 2. If you listen closely, you can hear the ess ring out on Hush, whereas Auphonic removes it. It also dealt with the ticking sound of example 5 quite well. Hush errs a little more on the side of natural. VoiceEx, of course, is very impressive for realtime, although it seems like it could use a dedicated DeReverb knob? And dxRevive is invaluable for certain problems for me. Thanks for the comparison, it's cool hearing them side by side!
@jesse.mccune 26 days ago
Thanks for the detailed response, Chris. I think it was you who mentioned in a response for a recent video that you liked Auphonic and it reminded me that I needed to include them in this shootout. My biggest issue with Hush is the inability to audition or preview how it will sound at the given setting. I don't want to render out an hour long track 2 or 3 times trying to find the best setting. I'm hoping others find this shootout useful because many of us don't have the time or patience to put something like this together. It's easy to quickly A-B test plugins, but not so much when it comes to the non-realtime options. My other goal is to help those who rely on the lower quality tools hear just how much better some of the other options sound, but I doubt it will have much impact on the Descript and Riverside editors.
@cherrysound 27 days ago
Thank you for this comparison!
@jesse.mccune 27 days ago
No problem. Did you have a favorite or one that surprised you at how well or how poorly it did?
@TopTierAudio 27 days ago
The one that surprised me was Auphonic. I didn’t expect it to perform that well in the audio repair game.
@TopTierAudio 27 days ago
I guess that’s the one that surprised you too 😂
@cherrysound 26 days ago
@jesse.mccune I was not surprised at how poorly RX 11 Dialogue Isolate worked 😂 Small changes for big-money upgrades: not a good way to promote their product)
@cherrysound 26 days ago
@TopTierAudio Yes, Auphonic is really good
@GRYHND 27 days ago
I don't know smart:gate or smart:comp, but are they multichannel plugins as well? WT2 becomes attractive for handling multiple mics in realtime. I use it on multiple channels; all of them appear in one window, and it works in an easy visual way that live sound techs are used to. The price is high, I agree. Cheers
@jesse.mccune 27 days ago
They are not multichannel plugins. We all have different needs. I can see the benefits of that during a live stream, but not really a deal breaker for me if I'm using them during post.
@Karam_Omar a month ago
Great work on a very important issue
@jesse.mccune a month ago
Thanks. I know it may not be an issue for many people, but for someone like me who prefers to work directly in their DAW, this is a deal breaker until they better optimize the plugins.
@Karam_Omar a month ago
@jesse.mccune I really appreciate your effort and your wonderful content. Keep climbing to the top 😍😍
@aftertheshowmoviepodcast a month ago
Been using this for a while for my 2-person podcast where we are in the same room. It works mostly, but I still need to quiet one of the tracks sometimes.
@jesse.mccune a month ago
We need a content aware smart gate for dialog. Something that can tell the difference between mic bleed and someone talking into a mic. That would be an instant buy for many of us. Adding in a leveler would make it what I hoped WTAutomixer was. Until then, we’ll have to deal with using gates, strip silence, and manually cutting dead air.
@aftertheshowmoviepodcast a month ago
@jesse.mccune Agreed. I have been manually stripping silence for years; it's the most time-consuming part of my week.
@robainsworth4384 a month ago
Any thoughts on comparing it with Waves Clarity Vx?
@jesse.mccune a month ago
I don’t think the regular version of Clarity VX was particularly good when it came out. VX Pro was one of the best of the last generation of noise reduction plugins and performed well against the previous version of Dialog Isolate. It was a little more resonant when pushed harder with a little more obvious gating. However, a lot has changed about a year after its release. We started seeing this new generation of AI-powered noise and reverb reduction tools which perform better than VX Pro. If you only need gentle to, perhaps moderate noise reduction, VX Pro should get the job done well enough. If you’re dealing with heavier noise reduction, one of the new generation plugins will serve you better.
@robainsworth4384 a month ago
@jesse.mccune Thanks for sharing
@estebanveron a month ago
I haven't tried the studio yet, but the enhance tool has been so useful for me to improve the audio of my videos. I usually record on my cellphone or laptop directly...
@jesse.mccune a month ago
It’s good to hear there are people who are happy with the results they get from it.
@laez a month ago
I found a couple of scenarios where RX11 DI was able to denoise quite a bit better than my other tools. I did not use the de-reverb. After I process offline with it, I then move to Logic for my standard workflow. Also, the RX11 editor is far more stable than 10 was for me. I do agree that it’s not optimal if you make a mistake and want to change something. If they add native Apple silicon support for the Logic/RX duo setup, it will be golden for me (useless in Rosetta mode).
@jesse.mccune
@jesse.mccune 1 month ago
That is the thing with noise and reverb reduction tools: there are always situations where one will outperform another. It's why I recommend having multiple options available, because there isn't one that excels at everything you can throw at it. What sorts of stability issues were you having with RX 10's editor? I'm surprised to hear that RX needs to run in Rosetta mode if used within Logic. I would have figured all the bigger-name developers were native by now. That shows how much RX work I do when using Logic.
@laez
@laez 1 month ago
@@jesse.mccune It's Logic that has to run in Rosetta if you want to use the spectral editor via ARA, so it's totally useless for now. They say an update is coming this summer. As for stability, RX 10 would crash regularly for me when making rapid edits: when running through and doing spot repairs very quickly, it would crash and just disappear. It recovers fine when you reopen it (nothing gets lost), but it breaks my flow because I have to figure out where I was. RX 11 completely fixed this.
@jesse.mccune
@jesse.mccune 1 month ago
@laez That makes sense. It sounds like the RX 10 editor had some bugs, but instead of fixing them in the current version, they make you upgrade to the new version for the bug fixes. Either way, I'm glad your issues have been sorted out. There's nothing worse than software that crashes while you're editing. I use Hindenburg, and the new version has been extremely buggy. I spent years asking them to simply store the playhead location on save and during autosave, so that when we re-open a project we don't have to try to remember where we were. They finally implemented that a few months ago, and it's been so nice. It's such a simple, standard feature in DAWs, but they couldn't seem to understand why it was important.
@laez
@laez 1 month ago
@@jesse.mccune Oh wow, yeah, totally agree. I do wish they'd fixed the stability in 10. It seems like a flaw in their versioning: they should backport stability fixes to older versions; those shouldn't be behind a paywall.
@jesse.mccune
@jesse.mccune 1 month ago
@laez Sadly, this seems to be par for the course when companies switch to an annual release cycle. As soon as they release, they're focused on next year's release, which will address some of the issues with the current one. For example, I can't see iZotope updating the CPU efficiency of Dialog Isolate or Repair Assistant in RX 11, but it will be something they use to promote RX 13.
@omerylmaz958
@omerylmaz958 1 month ago
Hahaha, I love it: Adobe Voice Destroyer.
@jesse.mccune
@jesse.mccune 1 month ago
I can find uses for most noise and reverb reduction tools, but that one changes the voice too much if there's too much noise. It makes people sound like they sucked on some helium. It doesn't even know how to handle decent-to-good recordings. Case in point: I tested a recording my sister made that didn't have any noticeable noise and needed only a little reverb reduction. The audio Adobe returned wasn't even recognizable as her. With these tools, the voice should still sound like the person after being processed, not like an AI caricature of the person, so "Voice Destroyer" seemed like a fitting name.
@omerylmaz958
@omerylmaz958 1 month ago
@@jesse.mccune To be honest, this feature used to be better when it first launched, but seeing things like this, everything is changing into a post-editor kind of thing. The human aspect will never go away. It's just that the person with more skills will always be ahead.
@jesse.mccune
@jesse.mccune 1 month ago
@omerylmaz958 Agreed. I haven't tried it since they moved it into their subscription package, but I noticed that each new algorithm seemed to be worse than the previous one. That was always a head-scratcher to me. It leaves me wondering what they're training it on, and who is testing it if they think the results are getting better.
@Tomaslav16
@Tomaslav16 1 month ago
I respect you for your precision and your meticulous attention to the details and subtleties of technical processes!!! You're perhaps the only one in the media space who is so precise!!! Definitely keep going! Your information "saves us a ton of time" when choosing tools for working with sound)))
@jesse.mccune
@jesse.mccune 1 month ago
Thank you for the kind words and I'm happy that you're finding my content helpful.
@ChrisPFuchs
@ChrisPFuchs 1 month ago
Neural network denoisers are generally used as AudioSuite or on the dialogue bus in post audio; it's just too intensive to run them on every dialogue track. Podcast editing is perhaps a bit unique in that you need far fewer dialogue tracks, so you could expect to get away with it. I think the real-time Dialog Isolate serves its purpose fine for post audio and does sound quite good, but the render time for the 'Best' mode is annoyingly long, and I can see why not being able to run multiple instances of it is a big negative for podcasting. I personally render my heavy noise reduction offline; the two best neural network denoisers, in my opinion, run off dedicated AI accelerators and are AudioSuite or offline rendering only at the moment.
@jesse.mccune
@jesse.mccune 1 month ago
What industry do you work in? Podcast editing is definitely a unique situation when it comes to dialog editing, for many reasons. I can see why running tools like this as AudioSuite or on buses would be the norm in many situations. What makes podcast editing different is that in most cases we're dealing with low-paying clients and short turnarounds, in an industry facing a lot of downward pressure and commodification of our skillset. Between AI and sites like Fiverr and Upwork, clients are increasingly asking for more while spending less, making efficiency the number one requirement for remaining profitable. I'd approach things differently if I were working on a project with higher standards than podcasting, where good enough and affordable is all that matters. What are the two best neural network denoisers you're referring to?
@ChrisPFuchs
@ChrisPFuchs 1 month ago
@@jesse.mccune Hush Pro is generally regarded as the best dialogue denoiser in the industry at the moment and runs off the AI hardware in Apple's chips. According to the creator, this allows orders-of-magnitude larger model sizes to be run compared to CPU-based neural denoisers, and I think it shows. Audio companies like iZotope have to develop for a much wider range of systems, however, and AI chips are really only found in a few Apple devices and NVIDIA GPUs, which means running off the CPU. Auphonic is the other one; its 'Dynamic' denoiser runs on specialized hardware in the cloud. I know you're a little opposed to cloud-based rendering, but the quality is good. It also has extremely quick render times, and you can have multiple files processing at the same time. I do podcast editing regularly for a company, but also do other post audio work.
@jesse.mccune
@jesse.mccune 1 month ago
@ChrisPFuchs Thanks for the response. I like Hush, but since I'm not a Pro Tools user, I don't have access to the Pro version. The biggest issue with the regular Hush is that there's no preview function or a way to render a small section to test the settings. It becomes an all-or-nothing endeavor where I guess at a setting, render, and listen back. I've had a couple of conversations with Ian and suggested this to him; he seems to think he can bring Hush into VST/AU format but needs to figure out a couple of things to allow for that. When you say you do podcast editing for a company, are you a contractor for an agency, or were you hired by a company to edit their podcast?
@ChrisPFuchs
@ChrisPFuchs 1 month ago
@@jesse.mccune Oh yeah, I can see how that'd be a deal breaker! Hopefully the real-time VST version is good. Ian seems like a cool dude. But to summarize my original comment about RX 11: it just seems like it was engineered within the constraints of post audio, with a solid real-time denoiser that sits on the dialogue bus and a slightly 'better' offline algorithm that works well for AudioSuite processing. I think it's 'fine' in that sense, but as you point out, its weaknesses definitely show for podcasting when you have long files to render or try to use it on multiple dialogue tracks.
@BayanChacra
@BayanChacra 1 month ago
iZotope are falling behind
@jesse.mccune
@jesse.mccune 1 month ago
They are and the Dialog Isolate plugin feels like a “we can do that too” type move. Sort of going through the motions, but not really interested in doing it well.
@cosmingurau
@cosmingurau 1 month ago
Wait, is that Jesse Plemons in the second comparison?!?!?
@jesse.mccune
@jesse.mccune 1 month ago
It's not. I had to go back and listen to the clip, but I can hear the similarity.
@cosmingurau
@cosmingurau 1 month ago
@@jesse.mccune Whoever he is, he can make a quick buck as a non-AI deepfake of Meth Damon's voice
@Флайчик
@Флайчик 1 month ago
It doesn't work in Discord. How can I fix it?
@jesse.mccune
@jesse.mccune 1 month ago
Supertone has a Discord server where you can get better help than I’d be able to offer.
@asmundma
@asmundma 1 month ago
I fully agree. I also have Acoustica from Acon Digital and SpectraLayers. The latter will be updated in a week, so it will be interesting to see their improvements.
@jesse.mccune
@jesse.mccune 1 month ago
There is no shortage of options these days for dialog cleanup tools, both as plugins and as standalone apps. I may have to check out the new version of SpectraLayers when it's released.
@abdalla_abdalla
@abdalla_abdalla 1 month ago
Can I ask what plugins you're using to denoise and de-reverb dialogue?
@jesse.mccune
@jesse.mccune 1 month ago
I primarily use Supertone Clear, Accentize dxRevive Pro and Cedar Audio VoicEx. Check out my video on Dialog Isolate to hear them compared to the new Dialog Isolate.
@abdalla_abdalla
@abdalla_abdalla 1 month ago
I got better results (fewer artifacts) with the RX 11 Repair Assistant than with Dialog Isolate.
@jesse.mccune
@jesse.mccune 1 month ago
If you did, it’s simply a matter of settings because the Repair Assistant is using Dialog Isolate under the hood. If you have RX Advanced you can open up the Repair Assistant’s module chain and see the settings it used.
@robnecronomicon1570
@robnecronomicon1570 1 month ago
Great video! Sad but true... iZotope were my go-to for post-production dialogue, but I'm letting them go. The 'loyalty price' was double what it was in other years, which is very cynical, imo.
@jesse.mccune
@jesse.mccune 1 month ago
Thanks. Annual upgrades like this only benefit the company. The focus turns to shipping new features instead of improving existing ones and taking the time to develop solid new features. My feeling is they missed their Fall 2023 release trying to get Dialog Isolate working well enough as a plugin to release it, or maybe trying to make it work in Repair Assistant. I'm not sure why I have it in my head that previous upgrades for RX Advanced were $299.
@mr_z_____
@mr_z_____ 1 month ago
As podcast editors, it's our job to make the episodes sound better. Leaving background noise in a podcast will make it sound amateurish; what radio show has background noise? The FX chain you describe is needed for bad recordings made with cheap mics in reverberant rooms, while properly made recordings with good mics take almost no work. So there are moments when a lot of work is needed, and moments when you can leave the audio untouched.
@jesse.mccune
@jesse.mccune 1 month ago
I agree. Most of what we deal with is not perfect. Even when I engineer the recording session for the client, I'm still stuck with the poor acoustics of their recording spaces. As video becomes more commonplace, attention to the background becomes more important. It's tougher to get people to record video in their closets. Our job is to process each recording depending on the needs of the audio.
@JulieKatt
@JulieKatt 1 month ago
Thank you for your review! I am a brand new audio editor, making short instructional videos and doing all the writing, recording the audio, editing the audio, recording video to the audio, then editing the video matched up with the audio. I use Adobe Audition and Premiere Pro. This is THE FIRST PLUGIN I've ever purchased. I bought it just a few days ago and finally had the time today to sit down and play with all of the RX 11 Elements options. I was very surprised when I followed the (not very good) video on the iZotope website; it didn't give instructions so much as show someone running it, assuming we've all done this before. When I clicked 'Learn' it said 'waiting for audio' even though I'd already selected everything with Control+A. So I played it anyway, and the audio was extremely distorted and choppy for my 3-minute video. I assumed it was user error since I am new. Then I closed out of the Repair Assistant and, sure enough, my original audio was fine. That's when I searched to find out what I was doing wrong and came across your video. As relieving as it is to know that I didn't do anything wrong, I'm very disappointed that I may have wasted money from my tight budget, just starting out on this project, on an option I cannot use. I don't understand a lot of the tone and sound settings, so I was really looking forward to this assistant making suggestions. I simply cannot use it either. 😞
@jesse.mccune
@jesse.mccune 1 month ago
I'm sorry that experience is what led you to this video. My wife tried using Supertone Clear as a plugin in Premiere to clean up some audio the other day, and she had the exact same experience. I'm wondering if that's more of a Premiere issue than an iZotope one. Perhaps it's a mix of both, or maybe Premiere can't handle more powerful plugins like these.
@ilblues
@ilblues 1 month ago
Had a thought, Jesse. Is it fair to compare the broadcast radio medium with podcasts? It seems like old-school radio DJs were more entertainers, while podcast hosts are interview hosts. A radio DJ's voice is his instrument, and many I've heard want to capture every breath and 'nyuk, nyuk, nyuk' they utter, whereas a podcast host is more information/education driven. Big difference between entertainment and education: silliness and seriousness...
@jesse.mccune
@jesse.mccune 1 month ago
I think some podcasts can be compared to radio shows; narrative shows come to mind. But podcasting isn't only one thing with one use case. I feel that's where the disconnect was coming from: this attitude that if it's not a highly produced, radio-style show like Twenty Thousand Hertz, it's not a podcast. Some people listen to or watch podcasts to learn something, or to be entertained, or to hear this or that person talk. Some want to listen to stories, or they just have something they want to say. They're all valid forms of podcasting, and we all have our own preferences. It doesn't make one style of production right and others wrong.
@thematthewbliss
@thematthewbliss 1 month ago
You can overthink in podcast editing, but it's not with the processing. When we're using plugins to process dialogue for cleanup and improvement, it's possible to err on the wrong side of "clean" and change the sound enough to ruin it. It takes a practised ear and a familiar processing chain to pick up when that happens and avoid it. I'd class that as over-processing, though. I reckon "overthinking" is when we spend 30 seconds debating internally over each um or ah, or over whether a slight nostril breath caught outside a gate should stay or go. Not spending more time on a section of audio than it occupies is a decent rule for me; any more, and it's overthinking and over-editing. My two cents: podcasting is different from radio, different from music... definitely its own thing! When someone says "ambience" I think of how Billie Eilish records her tracks with different vocal tones on different mics, in different spaces... podcasting is way different. Information, delivered cleanly and conveniently for the listener, is the important part. When podcasters talk about "their art", I don't think they're talking about audio processing (or the lack thereof) and how their traffic noise adds credibility to their content. There ARE things we need to be cautious of; I'd argue that the opinions of people who aren't open to being wrong are one of them!
@jesse.mccune
@jesse.mccune 1 month ago
Thank you for your well-written response. It's more coherent than anything I would have rambled on about. I completely agree with these points.
@ChrisPFuchs
@ChrisPFuchs 1 month ago
Ha, this is an interesting one. Podcast dialogue is really the only medium with such heavy processing and acceptable NR artifacts. I find it really over-processed for my taste, and I imagine this is where Radio-Dude was coming from. The best-sounding podcasts (in my humble opinion) almost always have people recorded in a decent-sounding room with decent-sounding mics, with tasteful EQ and compression. And really, it doesn't take much to get that sound. My $100 SM57 + A81WS sitting in an untreated bedroom sounds significantly better than 95% of the podcast dialogue I edit. But the truth is, and I think anyone who does this work would agree, we almost never work with that quality of dialogue. We're also limited in the time we have to solve and clean up issues. On top of that, podcast dialogue is expected to sound clean, consistent, and easy to listen to; the audience values clean dialogue over audio quality. Having a nuanced understanding of one's processing chain and workflow is a sign of a good editor, not overthinking.
@ilblues
@ilblues 1 month ago
I go back and forth between an SM57 and an SM58, into a Tascam DR-10X on a floor stand with a shock mount. It's the simplest 'podcast' setup I could figure. I did add a Fethead and an Xvive P1 phantom power unit between them to reduce the noise floor; without them, the DR-10X has to be set to high gain, which has some audible hiss. It turns out my music room is really resonant, especially with a half dozen acoustic guitars out. They resonate a lot when I read a script. Threading socks between the strings and over the sound holes helped, but the room itself 'talks'. Since the gear setup is so portable, recording in the master bedroom closet works: the racks of clothes make for great damping and yield a whisper-quiet recording. Not my idea; I heard somewhere that Mike Rowe recorded his book The Way I Heard It in his walk-in closet.
@jesse.mccune
@jesse.mccune 1 month ago
@ChrisPFuchs There's so much truth here. It's not difficult to get decent recordings with a little know-how; the problem is that so many podcasters don't take the time to learn how to position their mic. I can get good recordings from anyone with a well-positioned dynamic mic in any room. That becomes next to impossible if they're using a condenser. As you point out, as editors we are at the mercy of our client's recording engineer, which is usually the client themselves. It ties our hands: we have to deliver something listenable without over-processing.
@jesse.mccune
@jesse.mccune 1 month ago
@ilblues Even without the Fethead, removing the hiss is pretty trivial these days in terms of the damage done by the processing. I never thought about the impact a number of acoustic guitars would have on a room's resonance, but it doesn't surprise me. I still have a resonance in my space somewhere around 700 Hz, but I've stopped worrying about it. And yes, walk-in closets are usually the best-sounding rooms in a house. For anyone who isn't recording video, that should be the first place to consider recording.
@ilblues
@ilblues 1 month ago
Hi Jesse, what drives your process is the difference between the quality of the audio file your client provides and their expectation for the final product. If you're delivering what the client asked for, how is that overthinking on your part? Perhaps it's better said that clients over-expect? I launched my podcast less than 2 months ago. It amounts to me reading the blog I've published since '99. The learning curve wasn't as steep for me, as I've been a recording songwriter since the '70s. Episode 1 was the most time-consuming: write and record an intro and outro, write and record the voice overlay, line them up, and lock them into a master "make-from" podcast episode skeleton. All I have to do per episode now is drop the processed voice file from the narration onto a track, line it up with the intro and outro, set levels, and render. Including reading the 10-minute (on average) scripts, it takes 60-90 minutes per episode, depending on how many reading mistakes I made. The thing about "overthinking": it's not something you do with every episode. You learn, improve, streamline, and apply best practices. What at first may have been overthinking becomes a 'no-brainer' with experience. Jack in Sequim.
@jesse.mccune
@jesse.mccune 1 month ago
I asked him what type of audio he's used to working with. He agreed that radio tends to work with much better recordings and that he wasn't really familiar with the quality of audio podcast editors deal with. Even a company like Hindenburg seems a little out of touch. I reached out to tech support because I was getting glitchy audio while testing the new Dialog Isolate plugin, and when I told them my plugin chain, the response was, "Is the audio really so bad that you need all of that?" It was a pretty standard chain of noise/reverb reduction, EQ, compression, a gate, and a loudness meter. In my world, that's a standard chain, even with good audio. Unrelated: my wife and I love Sequim. We had considered moving there instead of southern Washington. It's on our list of places to get back to now that we're back in the NW.
@ilblues
@ilblues 1 month ago
@@jesse.mccune It's a great place if you love the outdoors. Lots of retirement-aged folks. The area is really busy from Memorial Day through Labor Day with weekenders and vacationers; it's a much slower pace of life from fall through early spring. The downtown area reminds me of Auburn, where I'm from.
@rossbalch
@rossbalch 1 month ago
I still use a lot of the RX tools, but dxRevive has taken over my source-fixing needs. Usually I run Revive first, then whatever other tools are needed after.
@jesse.mccune
@jesse.mccune 1 month ago
What kind of audio do you work with?
@rossbalch
@rossbalch 1 month ago
@@jesse.mccune Mostly podcast audio with less-than-techy co-hosts: poor microphones, badly configured interfaces, bad recording environments. The usual.
@jesse.mccune
@jesse.mccune 1 month ago
@rossbalch It's funny because I was talking to a software developer the other day who feels that podcast editors shouldn't have to deal with these types of recordings. The blame was placed on us editors for not teaching the client better, the clients for not caring, and the guests for not having adequate equipment or choosing the right place to record. This developer comes from a radio background and seems to not be in touch with the reality of what podcast editors face on a daily basis. He went as far as saying that I'm doing things wrong and overthinking things by applying noise reduction, reverb reduction, EQ, de-essing, compression, and Mouth De-click, De-plosive, and Voice De-noise as needed.
@infectropodo
@infectropodo 1 month ago
Great comparison, this is very helpful! All of them perform very well. Do you work in RX for dialogue, or in another DAW with real-time processing?
@jesse.mccune
@jesse.mccune 1 month ago
I'll work in my DAW whenever possible. For podcast editing I will only work in my DAW because clients don't pay enough for me to do manual work in the RX editor. If I'm working on a course or some other project that requires that level of detail, I have no problems diving into the RX editor.