Changes after Invidious closed its API on Oct 1st
Invidious closed its main instance on Oct 1st (https://github.com/iv-org/invidious/issues); this was the only API YToke~ used to search for YouTube content.
The Invidious site has been changed into a redirect page that lets users pick an instance and open the same page hosted there. Unfortunately, that means we can't use the main API anymore either.
So a hot fix is to point the API endpoint at another instance's API. This gives YToke a quick way to keep functioning, but since we don't know how stable those APIs are, pointing at only one of them carries the same risk.
I think the right fix is to make the original API endpoint a redirect as well, or to supply a list of available endpoints just like the UI does. That requires contributing to the original repo; I'll see what I can do, since this should be done and it might as well be me doing it.
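To make the hot-fix idea concrete, here is a minimal sketch of the fallback approach in Swift. The instance URLs are placeholders, not a vetted list; only the /api/v1/search path matches the real Invidious API:

```swift
import Foundation

/// Minimal sketch of the fallback idea. The instance URLs are
/// placeholders; a real list should come from the redirect page
/// or the Invidious documentation.
final class VideoSearcher {
    private let instances = [
        URL(string: "https://invidious.example1.com")!,
        URL(string: "https://invidious.example2.com")!,
    ]

    func search(query: String, completion: @escaping (Data?) -> Void) {
        tryInstance(at: 0, query: query, completion: completion)
    }

    private func tryInstance(at index: Int, query: String, completion: @escaping (Data?) -> Void) {
        guard index < instances.count else {
            completion(nil) // Every instance failed.
            return
        }
        var components = URLComponents(url: instances[index].appendingPathComponent("api/v1/search"),
                                       resolvingAgainstBaseURL: false)!
        components.queryItems = [URLQueryItem(name: "q", value: query)]
        URLSession.shared.dataTask(with: components.url!) { data, response, _ in
            if let data = data, (response as? HTTPURLResponse)?.statusCode == 200 {
                completion(data)
            } else {
                // Fall back to the next instance on any failure.
                self.tryInstance(at: index + 1, query: query, completion: completion)
            }
        }.resume()
    }
}
```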
Development Log – Lyrics
This all started from a karaoke session at my home, where a friend pointed out that YToke~ needs its own lyrics. You know how some MVs, especially the official ones, have a tiny lyrics font that only appears at the exact moment it's sung? That makes karaoke against those videos nearly impossible (unless you remember all the lyrics), so a built-in lyrics view for YToke~ became necessary.
Luckily, there's a public API available for lyric search, "gecime.com", which is great for my usage. So the design is: the client requests lyrics from this API as soon as the song starts to play. I did not route this through my backend because I don't want to increase the backend load and the number of API calls it would have to make.
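A minimal sketch of that client-side flow, assuming a made-up endpoint and response shape (the real lyric API's URL and JSON will differ):

```swift
import Foundation

/// Hypothetical response shape; the real lyric API's JSON will differ.
struct LyricResult: Decodable {
    let lyric: String
}

/// Fetches lyrics directly from the client as soon as a song starts,
/// keeping the load (and the third-party calls) off the backend.
func fetchLyrics(song: String, artist: String, completion: @escaping (String?) -> Void) {
    // Placeholder URL for illustration only.
    var components = URLComponents(string: "https://lyrics.example.com/api/search")!
    components.queryItems = [
        URLQueryItem(name: "song", value: song),
        URLQueryItem(name: "artist", value: artist),
    ]
    URLSession.shared.dataTask(with: components.url!) { data, _, _ in
        let result = data.flatMap { try? JSONDecoder().decode(LyricResult.self, from: $0) }
        completion(result?.lyric)
    }.resume()
}
```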
I fetch the lyrics based on the user's search terms because YToke~ doesn't know the actual name of the song, so sometimes the lyrics are not accurate. To compensate, users can re-search from the lyrics view by entering the song name and singer name. They can also choose to hide or show the lyrics view, which gives them maximum flexibility.
I think this is a good improvement: people can sing along no matter what the video looks like (as long as a song is playing).
Development Log – 09/27/2020
I found this repository, https://github.com/tsurumeso/vocal-remover, which does a great job separating vocals from background music. I tested it locally by passing it an .mp3 file, and it removed the vocals from the song perfectly.
I'd like to start with this library and try to integrate it with my app. The tricky part is how to do that; it still needs more investigation. One option is sketched below.
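One route I might try is bundling the tool and running it as a subprocess from the app. A rough sketch, assuming Python 3 is available and that inference.py accepts an --input flag (check the repo's README before relying on this):

```swift
import Foundation

/// Rough sketch of one integration option: run the Python tool as a
/// subprocess. Assumes Python 3 and the vocal-remover repo are installed
/// locally; the flag names are taken on faith from the repo's README.
func removeVocals(from inputURL: URL, completion: @escaping (Bool) -> Void) {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/env")
    process.arguments = ["python3", "inference.py", "--input", inputURL.path]
    process.currentDirectoryURL = URL(fileURLWithPath: "/path/to/vocal-remover") // placeholder path
    process.terminationHandler = { proc in
        completion(proc.terminationStatus == 0)
    }
    do {
        try process.run()
    } catch {
        completion(false)
    }
}
```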
Development Log – 09/17/2020
This week's progress marks a 2.0 milestone for YToke. The YToke client app now points to its dedicated endpoint service hosted on Google App Engine. Here's a breakdown of the backend:
The backend is written in Java as a Maven project. The project can be built and tested in a local environment, and Google App Engine can build and deploy it with a single command.
The backend searches videos through the Invidious API (which will be closed on Oct 1st; an alternative has to be found before then), fetches video statistics from a NoSQL database, and sends the combined data to the client app.
The client YToke app sends video playback data and user-selected video tags back to the backend to build up the video statistics database.
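On the client side, that round trip looks roughly like this. The base URL, paths, and field names below are illustrative, not the actual backend contract:

```swift
import Foundation

// Illustrative models; the real backend contract may differ.
struct VideoStats: Decodable {
    let playedCount: Int
    let finishedCount: Int
    let tags: [String]
}

struct PlaybackReport: Encodable {
    let videoID: String
    let finished: Bool
    let tags: [String]
}

let backendBase = URL(string: "https://ytoke-backend.example.appspot.com")! // placeholder

/// Sends playback data and user-selected tags back to the backend,
/// which uses them to build up the video-statistics database.
func send(_ report: PlaybackReport) {
    var request = URLRequest(url: backendBase.appendingPathComponent("videos/report"))
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONEncoder().encode(report)
    URLSession.shared.dataTask(with: request).resume()
}
```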
I think the initial setup of the whole client-server system is pretty much finished, with a few leftover tasks to do. After that I can start researching human voice removal and other interesting features. Stay tuned.
Development Log – 09/11/2020
I made great progress this week: the initial YToke backend is set up on Google Cloud! The initial setup supports video search, video tags, and played/finished counts. Having this dedicated backend gives YToke great flexibility: no client update is needed when the Invidious API changes, and it enables tags, statistics, and so on.
One thing I experienced this week is how much more usable Google Cloud is compared to AWS. I did some AWS work back when I was at ICF, and it took endless effort to figure out the project setup. With Google Cloud, the majority of the project setup is done by default, so the developer can focus on the code and the backend logic. It feels like AWS wants to replicate every physical thing, while Google Cloud (App Engine) wants to provide a serverless solution that hosts your backend; it's a completely different experience. You could say Google Cloud is less flexible, but the truth is it took me only one week to set up my backend, which could never have happened on AWS.
Well, this is a good start. I want to switch the client to this dedicated backend and provide more statistics features in the near future. I'm also going to open-source the backend code.
Development Log – 09/04/2020
I could not figure out AVAudioEngine with an aggregate device... even after a week's struggle. The thing is, I create the aggregate device, but the audio engine won't stream after the device switch. If I release the audio engine and create a new one, some random error shows up. AudioKit does no better.
This API is hard to use and poorly documented, and I couldn't find a good example. Anyway, I can only do a single device switch in the app, but that seems good enough for now.
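For reference, the single-device switch that does work is a one-property affair: point the input node's underlying audio unit at a specific device. A sketch, with error handling simplified:

```swift
import AVFoundation
import CoreAudio

/// Switches AVAudioEngine's input to a specific device by setting the
/// underlying audio unit's current device (macOS only). Sketch only;
/// a real implementation should handle the OSStatus errors properly.
func setInputDevice(_ deviceID: AudioDeviceID, on engine: AVAudioEngine) -> Bool {
    guard let audioUnit = engine.inputNode.audioUnit else { return false }
    var device = deviceID
    let status = AudioUnitSetProperty(audioUnit,
                                      kAudioOutputUnitProperty_CurrentDevice,
                                      kAudioUnitScope_Global,
                                      0,
                                      &device,
                                      UInt32(MemoryLayout<AudioDeviceID>.size))
    return status == noErr
}
```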
I want to start the backend work as soon as possible, which will be next week. I tested out Google Cloud and it is amazing; a totally different experience from AWS.
It seems super easy to deploy a backend with its built-in Eclipse IDE and terminal, without having to think about deployment details, which is exactly the kind of work I want to avoid.
I think this is the route to go for now: a simple backend on Google Cloud with a NoSQL database attached. Let's see how it works out.
CoreAudio Struggle – Development Log 09/01/2020
I'm struggling to create an aggregate audio input device and then connect it to AVAudioEngine. This API is low-level and old, and Apple doesn't offer much discussion of it. I guess I will just have to investigate by myself.
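For the record, here is a sketch of the aggregate-device creation itself via the CoreAudio HAL. The string keys correspond to the kAudioAggregateDevice...Key constants in AudioHardware.h; connecting the result to AVAudioEngine is the part that still fails for me:

```swift
import CoreAudio

/// Sketch: create a private aggregate device from two existing devices'
/// UIDs using the low-level CoreAudio HAL API. The dictionary keys match
/// the constants defined in AudioHardware.h.
func createAggregateDevice(uid1: String, uid2: String) -> AudioObjectID? {
    let description: [String: Any] = [
        "name": "YToke Aggregate",      // kAudioAggregateDeviceNameKey
        "uid": "com.ytoke.aggregate",   // kAudioAggregateDeviceUIDKey
        "subdevices": [                 // kAudioAggregateDeviceSubDeviceListKey
            ["uid": uid1],              // kAudioSubDeviceUIDKey
            ["uid": uid2],
        ],
        "private": 1,                   // kAudioAggregateDeviceIsPrivateKey
    ]
    var aggregateID = AudioObjectID(0)
    let status = AudioHardwareCreateAggregateDevice(description as CFDictionary, &aggregateID)
    return status == noErr ? aggregateID : nil
}
```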
The upside is that once I'm done with it, I could wrap my code into an SPM/CocoaPods library and publish it, which would be very helpful for Mac developers.
Development Log – 08/29/2020
We really have to test the permission flow on macOS and iOS. Somehow the production app does not get microphone permission, so the microphone doesn't work until you un-toggle and re-toggle it in the permission settings.
It is a shame for an iOS engineer not to take permissions into account; privacy is one thing every iOS developer should keep in mind.
I plan to modularize the permission code and add a portal under the “mixer” tab that takes the user to the settings.
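A sketch of what that permission module and portal could look like (the deep-link URL opens the Microphone privacy pane in System Preferences):

```swift
import AVFoundation
import AppKit

/// Sketch of the planned permission module: check or request microphone
/// access, reporting the result to the caller.
func ensureMicrophoneAccess(completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .audio) {
    case .authorized:
        completion(true)
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .audio, completionHandler: completion)
    default:
        completion(false) // Denied or restricted; send the user to settings.
    }
}

/// The "portal": opens System Preferences at the Microphone privacy pane.
func openMicrophoneSettings() {
    let url = URL(string: "x-apple.systempreferences:com.apple.preference.security?Privacy_Microphone")!
    NSWorkspace.shared.open(url)
}
```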
Apps written 5 years ago...
Since I have my developer account renewed, why not put my iOS apps and games back on the App Store?
They were written in Swift 1/2, so the first thing needed was a conversion to Swift 5 and a recompile. That basically took me a whole day of fixing syntax changes and everything. I also have to test the app itself, iAd, social sharing...
The code I wrote in 2016 is total shit! I put everything in the ViewController; one ViewController in my game “2357P” is almost 1,000 lines. I can't believe I wrote that code and still thought I was so good back then.
But anyway... I may not work on the KTV app until I put these iOS apps back up on the App Store, which may take 2 days if everything goes well.
Development Log – 08/25/2020
I finally figured out how auto-update with Sparkle works. The “annoying” part is signing the DMG: you have to code-sign every DMG and maintain an XML file somewhere with the correct keys. I'm sure there are security reasons for this, such as the need to verify that new updates come from the same project and the same developer. Anyway, I figured it out, and a new release with this functionality is published.
With my developer account re-enabled, I wonder if I should put my previous iOS apps back up on the store. They need some testing, since they were written 5 years ago.