Hosting a Mastodon

Crossed Wires

Episode 76 | May 03 2024 | 01:06:16

Hosted By

James Bilsbrough & Jae Bloom

Show Notes

As centralised social media platforms become more and more hostile, with not only toxicity from undesirable elements but also massive privacy concerns, the Fediverse has become somewhat of a refuge for those who want to share freely and openly, without the worry of appeasing the almighty algorithms.

There are some fantastic Mastodon instances for you to join, but what if you want a bit more control, want to build your own community, or want to represent your brand? You could set up your own server, assuming you have the skills and patience to do so, or you could take advantage of a service like Masto.host.

Our guest, Hugo, provides what he describes as 'Shared Hosting for Mastodon' at incredibly affordable prices and with some much-appreciated transparency on infrastructure and update policies.

We talk about how things got started, his encounter with TERFs, and the freedom to refuse service to anyone thanks to the decentralised nature of the Fediverse. We also do a bit of a deep dive into the infrastructure setup for Masto.host and the measures Hugo has put in place with the aim of giving everyone a great experience.

Do you have your own instance, or maybe have some stories to share of instances gone wrong? We'd love to hear from you, so please send us a note to podcast@crossedwires.net, or why not come join the discussion on our Discord server?

If you liked this episode or any of our content, we’d greatly appreciate any little bit of support you can throw our way over at our Ko-Fi page.

Affiliate Promotion

"Don't you people backup? I backup, and I don't even know what backing up means!" - Nicola Murray, Secretary of State for the Department of Social Affairs and Citizenship

If you have any kind of file that's important to you, be that a treasured family photo, your latest research paper, or just a list of the co-ordinates of your best places in Minecraft, you'll want to make sure it's kept safe, right? Well, just syncing it to the cloud isn't really enough; you need a proper backup strategy too.

Part of a good backup strategy is having a backup that isn't in the same place as your computer, and this is where a good cloud backup service is so important. Our friends at Backblaze provide simple, reliable, and affordable backup options for your Mac or Windows PC for just $9/month. You can get a 15-day free trial when you follow this link to sign up.

Chapter Times

  1. 00:00:04: Introductions
  2. 00:05:57: Shared Hosting for Mastodon
  3. 00:11:26: Ditching Twitter
  4. 00:14:43: Turfing out TERFs
  5. 00:22:21: Why Have Your Own Instance?
  6. 00:33:59: Infrastructure
  7. 00:50:07: Masto.host Pros & Cons
  8. 00:58:50: Wrapping Up

Credits

Intro and outro theme: Ace of Clubs by RoccoW

Episode Transcript

[00:00:05] Speaker A: Hello and welcome back to Crossed Wires. So this is Jae, and I'm currently... hey, is there anybody else in this apartment right now? [00:00:17] Speaker B: Yeah, there's someone making quite a bit of noise in my apartment. Just one second, folks. [00:00:21] Speaker C: Hold on. [00:00:22] Speaker A: I can't hear you, James. I'm hearing somebody else here. I'm hearing a lot of noise. [00:00:25] Speaker C: Can you keep it quiet over there? [00:00:26] Speaker A: No, I'm currently sitting in the other room, in the lounge at James's apartment, because I'm over in England right now. So how are you doing, James? [00:00:34] Speaker B: I am good. We've endured another big couples test this week: putting together IKEA furniture without killing each other. [00:00:45] Speaker C: Yes. [00:00:45] Speaker A: And then rearranging everything, moving stuff out of the one room, tearing everything down, putting it back in. [00:00:52] Speaker B: I'm just gonna say I'm really glad we did not have a fire or disaster, because literally, folks, just as a fun preamble to the episode, at one point we had no way to get out of the flat other than potentially jumping out of one of the windows. So maybe better planning on our part. Anyway, Jae, let's introduce... I can't speak today at all. I've not had enough coffee. Go on, Jae, over to you. [00:01:22] Speaker A: I think our guest today is one of the people behind a social network. So, of the social networks out there, is it Zuckerberg of Meta? No. [00:01:33] Speaker B: No. [00:01:35] Speaker A: Is it Elon Musk of X? No. [00:01:38] Speaker B: Hope not. [00:01:39] Speaker C: No. [00:01:40] Speaker A: I think it is the amazing Hugo of Masto... is it Masto.host or Mastodon.host? [00:01:46] Speaker C: Masto.host. [00:01:48] Speaker A: Yeah. Which is the service that we actually host our Crossed Wires instance on, and we have absolutely been loving it. Like, Hugo, you do an amazing job, you are always very, very responsive and all that. And it's been up, unlike the first Mastodon instance we were on for Crossed Wires. That one went down for a bit and we were like, oh no, what happened? It came back up again, and then we found out that it was run by this far-right guy. [00:02:17] Speaker C: Right. [00:02:20] Speaker A: Then we moved to mastodon.social, which is good, but we were like, you know what, we want to just have some extra safety and all that. So, Hugo, can you give us a little bit about your background? [00:02:33] Speaker C: Yeah, so I've been doing web development since the beginning of 2000, so it's been a long time. In the beginning it was really website development, stuff like that. Then I started a couple of projects that ended up being okay and developed into companies. I was in the process of thinking about what I was going to do next when I ran into Mastodon, and it took over from there. It was really a coincidence. I also have a hosting service in Portugal, just regular WordPress and PHP with cPanel, the regular hosting services for the Portuguese market. I was just holding onto it because the old provider that hosted my websites had almost disappeared from the face of the earth. He could not continue to host the websites, so we reached an agreement, I took over from him, and I migrated everybody, those clients and my own websites, to a new service that I created. But it's not really something that I was very engaged with or that I wanted to develop any further.
And so, yeah, I ran into Mastodon. I thought the tech was really cool. I was really curious how it worked on the back end, and I tried to install it. I struggled a lot. I spent like a day installing it, and I thought, oh my God, if a person like me, who has some experience, takes so long to install something like this, most people will give up. So I just created an HTML page, put it up, and said, if you want me to install it and host it for you, just reach out. And, yeah, in a couple of months I had 100 people contact me and I was hosting like 100 Mastodon instances. That was 2017. Since then it grew and grew, and now it's what I do almost full time. [00:05:24] Speaker B: And so this is before the mass exodus from Twitter. This is the mass X... oh, dear, did I just make that sound like one of your exoduses? You know, the fediverse as a whole is growing, Mastodon is growing, but it's not had that, I hate the phrase, but that come-to-Jesus moment where people are flocking to it because of the state of X, the state of Twitter. So quick question, man. It's really interesting to hear that you were hosting regular WordPress-based websites and running cPanel; I've done reselling of that stuff many, many years ago. The stack of a Mastodon instance must be so different to, for want of a better word, the LAMP stack. How much of a learning curve, and you've hinted there was a learning curve, was it to set up that sort of client-focused hosting environment for Mastodon? [00:06:37] Speaker C: First of all, for you to have an idea: you are talking about the Twitter exit, and this was way before that. Some people had left Twitter by then, but Mastodon was not even one year old. Right now, and there are other softwares on the fediverse too, Mastodon alone probably has 12,000 servers; at the time it had, like, I don't know, 200, something like that, when I started. So it's a completely different dimension. But to what you were asking: I had some experience on the development side of software, but not a lot of experience regarding hosting and servers and stuff. So I needed to create something, and I started to learn, I started to investigate. Some people in the community helped me, but the main problem was that nobody had done what I was trying to do. Nobody had offered shared hosting for Mastodon, meaning the same server hosting multiple instances. There was no information, like, what could break? What limits should we look out for? Could ten instances with 1,000 users each live together? What would break if that happened? What were the bottlenecks, etcetera? Because having one instance with 10,000 users is different from having ten instances with 1,000 users each; the resource needs are different. And so I was the guinea pig for that. I was the one that had to say, okay, let me try with five instances. What happens? It's all good. Okay, let it run for a week. Let's put ten and see what happens, and then carry on from that and test: okay, do I need more RAM? Do I need more CPU?
Do I need to separate things? Mastodon has basically three layers, three services: the background jobs, the HTTP requests, and the streaming. The streaming is pretty much harmless because it doesn't require a lot of resources. But the other two, should I separate them? Should I not? How would the databases be hosted? Is it better to have them on different servers? Everything like that was something that I needed to test in production. And, yeah, that was how I approached it. It was not about creating a test environment and stuff like that, because it's really hard to simulate live traffic. For instance, if you have a server that is federating with 2,000 other servers, simulating that in a testing environment is really tough, unless you build such a large infrastructure that you spend more time on that than on properly developing. So it was really live testing. It's like, oops, did I break something? No? Okay, so it's good, let's carry on. And if I did, let's roll back. I didn't shoot myself in the foot very often, and so I managed to reach a balance that I am now comfortable with. But there are still many things that I know I should improve and change. It's just that the project is me alone, there is nobody else doing it. I'm the cleaning lady, the support guy, the guy that does communication, the guy that maintains the servers. So my development is very slow.
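For anyone trying to picture the three services Hugo just listed, here is a minimal sketch of how they map onto separate processes. It loosely mirrors the service split in Mastodon's upstream docker-compose.yml; the image tag, ports, and env file are illustrative assumptions, not Masto.host's actual configuration, and a real deployment also needs PostgreSQL and Redis running alongside.

```bash
# Mastodon's three service layers as separate containers (a sketch).
IMG=ghcr.io/mastodon/mastodon:v4.2.0

# 1. HTTP requests: the Puma web server renders pages and the REST API.
docker run -d --name web --env-file .env.production -p 3000:3000 "$IMG" \
  bundle exec puma -C config/puma.rb

# 2. Background jobs: Sidekiq workers handle federation delivery,
#    media transcoding, emails, and so on.
docker run -d --name sidekiq --env-file .env.production "$IMG" \
  bundle exec sidekiq

# 3. Streaming: a small Node service that pushes live timeline updates
#    over WebSockets; cheap on resources, as Hugo notes.
docker run -d --name streaming --env-file .env.production -p 4000:4000 "$IMG" \
  node ./streaming
```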
[00:11:26] Speaker A: So was there some type of impetus that made you want to leave Twitter, for instance? One of the things that made me want to leave was that I kept getting attacked because of my identity and stuff like that. How active were you on Twitter or the other networks, and what led you to throw away the popularity they had at the time for an up-and-coming network like Mastodon? [00:12:00] Speaker C: I'm not a big social media guy. I don't use social media much. I have accounts on probably everything. Even today I keep my Twitter account; I deleted all my posts, but it's there. If I want to lurk, I can go there and lurk. It's really rare these days, and now it's completely closed: if you don't have an account, you don't see much of the content. But even when it wasn't, I still kept the account. So I was not, back then, and I'm still not, somebody that posts frequently and engages a lot on social media. So it wasn't because of that for me. What drove me to Mastodon was the technology, was the curiosity. How does this thing work? This is really fun, this is really curious. That was my driving force. When I joined Mastodon in 2017, the LGBTQ community was, like, the largest. There were maybe a lot of people from Japan that were into fandom and stuff like that, but from Europe, most people were from the LGBTQ community. And honestly, the attacks that the community has experienced were something I hadn't been exposed to, so it was also a learning curve for me personally, because there were a lot of topics I didn't know about. Like, I didn't know what a TERF was. And so somebody came to me and said, we are a community that wants to defend women. I don't know exactly how they put it, something like, we think it's important to have a different point of view in terms of how women are being exposed to stuff. And they asked, are you okay with us having an instance with you? I said, yeah, fine, from what you described it's all good. And then I realized what a TERF was, and I had to ask them to leave. [00:14:44] Speaker B: So that leads us onto a really good... Jae, you're probably slamming your show notes shut, because Hugo's just led us into this perfectly. [00:14:52] Speaker A: Just for any of our listeners: TERF stands for trans-exclusionary radical feminist, basically somebody who's for feminism but wants to exclude trans women from the discussion. So yeah, that was a perfect segue. [00:15:13] Speaker B: Well, you see, I had to learn that as well, because when people talked about TERFs I was thinking, why is everybody really upset about the grass you get on football pitches? Sorry. But you just mentioned it, Hugo, and it leads us into this perfect question. There's so much more about the tech and stuff that I know me and Jae want to ask you about, but moderation: how do you decide who is, I guess the best word is a good fit, who fits with your values, for coming on board on Masto.host? Because the idea is, you're providing a Mastodon instance, and our crossedwires.social instance is a Masto.host instance: you provide the instance, you handle all the server-side stuff, all the updates, everything like that, and we don't have to stress about it. But in terms of which groups or who you allow onto the platform, that must be an incredibly tough decision. What do you base your rules on? What happens, for example, let's give a scenario: let's say that somehow a neo-Nazi group slipped through. They pretended to be all lovely people and they got onto Masto.host, and then their instance was frequently involved in attacks, obviously on your platform. What's your stance? How do you deal with that? What's your policy? I'm probably not making much sense, but... [00:16:42] Speaker C: Yeah, I understand what you're asking. So first, let's make things clear: I don't do moderation. My moderation is, it's okay, or, please move your instance somewhere else. This is the good thing about Mastodon and the fediverse in general: I will not shut anybody up. It's like, okay, fine, I don't want to host you. Here's your data, and you can host it somewhere else. So it's different from moderation, because I'm not deleting their posts or blocking users or stuff like that. That's the distinction I would like to make first. But the good thing about it being only me is that I can decide. I have terms of service, but if something new comes up, I just update the terms of service. I contact the person and say, look, this is happening, I'm not comfortable, here's your data, can you please move to a different instance? If it's something in a grey area where I don't feel comfortable, I'll leave the instance up. If it's something I'm really uncomfortable with, I'll just email them and say, okay, I had to stop the service. Please migrate, here's your data. But that's because it's just me. If it was a company, and I had shareholders and stuff like that, I would have to approach things in a completely different way.
As it's me, I can make a decision really fast and act on it really fast. And deep down, how I make the decision is a personal barometer, if I can call it that, because many things I hadn't even considered could exist. For instance, a recent one I had was the true crime community. It's something that exists a lot on Tumblr, or did exist a lot on Tumblr; I did some searching around the topic when I ran into it. It's a community that, I don't know, they are fans of mass murderers. People that go and shoot schools and kids, people that commit mass murders: they go there and post things like, oh, I miss you so much, you were right, and people don't understand, blah, blah, blah. With photographs of them with machine guns and stuff like that. It's like, no, I don't want to host this. So it's something that I didn't even dream could exist, and then, yeah, they found their way to me. But there is a curious thing, and this was not the case in the example that I provided: when somebody emails me and says, look, I have this instance, I wanted to check with you first before I install it, to see if it fits your terms of service, it's a red flag. In my experience, if they are contacting me like that, I leave it open but keep checking what's going on there, because usually that's a good sign that they know they are doing something in a grey area. [00:20:53] Speaker B: That makes a lot of sense. Jae, obviously, to me, that gives me a lot of content... sorry, a lot of confidence, in knowing that we're working with a good service where Hugo's taking a personal interest in the rest of the instances on our shared hosting, because that's kind of what Masto.host is. As a trans woman, as someone who is marginalized, does that give you a lot more confidence in the service? [00:21:23] Speaker A: Absolutely. And one thing I want to just touch on, I know we had a Mastodon episode a while back, but Hugo touched on one of the best aspects of Mastodon and the fediverse: say someone does not fit what you want to have, Hugo, I love the fact that you just say, hey, here's your data, you can go elsewhere. Unlike, for instance, when you and I were on X, or even when I was on Facebook and I decided I didn't want to be there anymore, I deleted the account and it was gone. Or say Musk didn't want me on there and he deleted my account: it's gone. Whereas with Mastodon, you can basically say, hey, you can still be on the fediverse, you just can't be here. And that's the beauty of everyone having different instances that all connect to each other. I think that's probably one of the best reasons to make your own instance: if you don't fully trust your instance host, or you want to have that assurance. Like for us, I know we wanted to have the assurance that when we put this stuff out there, we're not going to run afoul of somebody else's guidelines, because I know some instances don't like people promoting podcasts or streams or merchandise.
That's why one of the reasons we wanted to make our own was to be able to say, hey, we're going to be putting out quality content, but we don't want to run afoul of other instances' rules, and we also wanted to have that brand presence. But I'm with you, Hugo. I was looking at all the install stuff for Mastodon, all the hosting and stuff, and my head was spinning. I mean, James, were you the same way, with your head spinning? [00:23:20] Speaker B: Oh gosh, yeah. I think that's honestly what held us back, and I think it's something Hugo alluded to earlier: it's not an easy install. And unfortunately, it does seem that that's very common with a lot of fediverse software. I mean, we looked at a PeerTube instance, and I have a bit of experience with setting things up; I can install a Nextcloud instance fine. [00:23:44] Speaker A: And I've done WordPress so many times, but both of those have had so much work put into their installers. But yeah, I think that's probably one of the struggling points for people getting onto Mastodon: one, what instance do you choose, and two, if you want to make your own instance, how do you start it? Which is why, Hugo, your Masto.host was actually one of the first things recommended to me when I joined Mastodon, and I kept a bookmark to Masto.host. And then, I think it was August or December of last year, I messaged James: James, we're doing it. [00:24:24] Speaker B: Yeah, we're making it. So I want to talk about this a little bit, because we talked a little bit about scalability. Obviously the resources, the limits of a Mastodon server, are very fluid, and Hugo, your price plans are really very, very flexible. And for us, look, we'll be transparent about which plan we're on: the Moon plan, and the Moon plan is $6 a month plus tax. Now, quick question on that one, because you scale these plans and you're very clear in the comparison. The first thing listed is the federation capacity, then the number of processing threads, the size of the database, media storage, and an estimate, and I'll come back to that, of the active users. How do you determine those? Because obviously there are no specs in terms of, oh, you've got a quad-core Xeon, this much RAM; you take care of all that. Are these estimates? How do you get to these sort of plan sizes? Is this from experience? And how does the federation capacity impact people's experience, I guess, is my question. [00:25:42] Speaker C: First of all, there are no RAM or CPU limits on Masto.host. The reason I did that is because, for instance, if you want to upload a video... let's say that I gave half a gig of RAM and one virtual CPU to your instance on the Moon plan, that is $6 per month plus tax. If I did that, every time you uploaded a video it would take a long time. It would probably even time out, because there was not enough RAM to transcode the video or something like that. And every time one of the people you follow posts a video on a different instance and it gets cached on your instance, the transcoding also needs to happen, because everything is stored locally in a cache.
So this would be a massive problem: your background processes would hang because they were waiting for the video to process and stuff like that. So that was a decision I made, and my limits are on the number of processes that you can have running. Basically, I assume that there is a balance: if I have 20 instances on the same server, one will be not doing much, another will be transcoding a video, another will be doing something different. I have some stuff that is specific to video transcoding where I better optimize the server resources, but still, the logic is the same. Let's say in your plan you have two processing threads that you can use. What will happen is, if you run out of processing threads, everything will start to queue up and wait. If it's a processing thread on the web front end, the loading of the page will take longer. If it's in the background, it will process one job, and then the next one, and then the next one, and the other ones need to wait, queued up. So in practice, what you will notice if you are running out of processing threads is that either your page loads will be slower, or your federation will be slower, meaning that a post made 30 minutes ago only now shows up in your feed. And the same the other way: a post that you make now will take longer to reach other servers. So that is mainly it. Obviously, in extreme cases, if you keep on pushing it, it will stop responding.
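As a rough illustration of what a "processing thread" means in Mastodon terms, these are the real upstream tuning knobs for the web and background layers. How Masto.host maps them onto its plans is not public, so the numbers below are purely illustrative.

```bash
# Web front end (Puma): worker processes x threads per worker bounds
# how many page loads / API calls can be served concurrently.
export WEB_CONCURRENCY=1   # Puma worker processes
export MAX_THREADS=5       # threads per Puma worker

# Background jobs (Sidekiq): -c sets how many jobs (federation
# deliveries, media transcoding, etc.) run in parallel. When all of
# them are busy, new jobs queue up and federation lags, exactly the
# slowdown Hugo describes.
bundle exec sidekiq -c 10
```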
[00:29:01] Speaker B: Okay. So realistically, the plan size determines how much federation can happen at one time, how much processing can happen in what time, and effectively the load on my server. And I guess, is it fair to say that's where the estimate of the active users on each of the hosted instances comes from? So for example, saying five. And in terms of media storage and database storage, obviously, as with any storage, I assume there is a fair amount of cleanup that needs to be done. That's all automated, I take it, in terms of clearing caches and clearing old storage off? [00:29:42] Speaker C: Yeah, let me answer. You gave me three questions, let me go over them. [00:29:50] Speaker B: Yeah, of course. [00:29:52] Speaker C: So, the estimated users: it's an estimation based on the average across the Mastodon instances that I host. If your usage is similar, you can expect everything to run fine in terms of federation, page loads, etcetera. But that is not to say that the database space will be enough for five users, because it greatly depends: a user that makes a post every five minutes is completely different from a user that makes a post every five days. And the same thing regarding federation: it's completely different if you are following 10,000 remote users or 100 remote users, because everything is cached locally. Let's say you start a new server. The server is empty, you create an account for yourself, and you follow somebody on a remote server. From that moment on, all posts from that remote user get copied to your database, and if they include media, the media is also copied to your media storage. As you can imagine, the database will grow over time, because if they have made a thousand posts since you started following them, you have a thousand posts from them in your database. And this is somebody you follow, not even your own user, but you are hosting a copy of their data, so it is using your database space. Regarding that, it's up to the admin to decide whether to delete this cached data, and they can do that in their administration interface by setting a content retention period, meaning they say, okay, I don't want to host remote data longer than two years, or five years, or whatever. But by default it's not deleted. This has some downsides when you set it up; I can talk about that later. So in terms of the database, it's always growing. Even if you start an instance, follow 100 remote users, it's just you on the instance, and you never go back to it again, your database is still growing, because federation keeps on happening and you are still caching all the remote data. It's something people often don't realize; they contact me: I'm not even using it, how can I be hitting the limits? And this is the explanation. In terms of media, I have automated cache cleaning of the media built in for everyone on Masto.host. You can also set your own media retention period, but the automated cleaning is safe, because Mastodon has ways to refetch anything that has been deleted, in case you scroll to older posts.
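The retention settings Hugo describes live in Mastodon's admin interface, but self-hosters get the same levers through Mastodon's tootctl CLI. The commands below are real; the day counts are arbitrary examples.

```bash
# Remove locally cached copies of remote media older than 7 days.
# (Mastodon can re-fetch them on demand, as Hugo notes.)
RAILS_ENV=production bin/tootctl media remove --days 7

# Prune old cached remote statuses that nothing local references.
RAILS_ENV=production bin/tootctl statuses remove --days 30

# Report how much disk the media storage is currently using.
RAILS_ENV=production bin/tootctl media usage
```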
[00:33:26] Speaker B: Got you. Okay. Which is good, because again, Mastodon has got those tools built in. And it's fascinating; again, this is so different to my own experience, and Jae, probably your own experience: obviously we have a WordPress instance hosted with Linode, or Akamai as they are now. Jae, do you want to ask some more questions? I know we've got quite a few things we want to touch on, and obviously we don't want to take up too much of your time. This is a really interesting discussion. [00:34:00] Speaker A: So, from what I'm hearing, you built the hosting infrastructure. When you update Mastodon instances, are they all individually hosted? Or, one of the things I'm thinking about is WordPress multisite, where you have one WordPress installation across a lot of sites. What is the infrastructure for Masto.host like, as much as you want to share? [00:34:31] Speaker C: Yeah, yeah, it's fine. Everything is public. If you go to masto.host/infrastructure you will see it, at least the concept that I created. So first, to compare: obviously the software does completely different things, but in terms of WordPress, you have PHP, MySQL and Apache, basically. That's it. You have these three and everything runs there. In terms of Mastodon there is more. There are two databases, the PostgreSQL database and the Redis database, plus Ruby, Node, and Nginx. So it's a bit more. And LAMP installations have been around for a really long time, while installations of Ruby or Node are usually separate: somebody that needs to install Node doesn't usually need to install Ruby on the same machine, or something like that. So there is not much information, and there is no one-click, easy way to do it, while for LAMP there are a lot of resources, and even the WordPress installer takes a lot of the work out of your hands. Back to your question: every instance is separate, except for the database cluster. I have one cluster for multiple databases. That was the way I found that made more sense, because, just to start with, a dedicated PostgreSQL cluster would use almost as much RAM as a plan like yours needs in total, so it would be counterproductive. Whereas with, like, 100 databases on a single cluster, the per-instance RAM and CPU needs are almost negligible, and I can use that CPU and RAM for the processing that is required. I don't know if you want me to go over all of this, but I can happily go over it and technically explain how I built the infrastructure. [00:37:36] Speaker B: I'm looking at the diagram now. The shared... sorry, start again. The shared host infrastructure diagram. Now, as Hugo said, this is all very open, accessible from the front page. Now, what fascinates me, am I understanding correctly, Hugo, from what you were saying just now: our crossedwires.social is, I'll let you explain this, but is it one of these app servers? Is each Mastodon instance one of these unique app servers, is that correct? [00:38:10] Speaker C: No, that's not it. An app server hosts many installations as Docker containers, where only the Mastodon code is running; the data is a layer behind. It's like, I have Nginx on top, on the first layer of the diagram; that's how it looks. Nginx is the first layer. Then there is a private network that connects it to the servers where Docker is running the code. And then there is another layer even below: another private LAN that connects to the databases. This way, if somebody could, for instance, infiltrate and access the front end, the end that is exposed to the public, they could not communicate with the database layer. It's an extra layer of security. It's something that I built at the time because I thought it was interesting to do in terms of security, and also in terms of performance, because this way I can really optimize the servers that host the databases to run only PostgreSQL, for instance. [00:39:50] Speaker B: Got you. [00:39:51] Speaker C: And this way I can tweak the configuration of the server in a way that is not optimal to run Ruby, for instance. Every instance is a Docker container inside one of the app servers.
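The layering Hugo walks through, a public-facing proxy that can reach the app layer but has no route to the databases, can be sketched with plain Docker networks. Everything here (names, images, the two-network split) is an illustrative reconstruction of the concept, not Masto.host's actual configuration.

```bash
# Two private networks: the proxy only joins edge_net, the database
# only joins data_net, and the app container bridges both.
docker network create edge_net
docker network create data_net

docker run -d --name db --network data_net \
  -e POSTGRES_PASSWORD=example postgres:16

docker run -d --name app --network edge_net \
  --env-file .env.production ghcr.io/mastodon/mastodon:v4.2.0 \
  bundle exec puma -C config/puma.rb
docker network connect data_net app   # app can now also reach the DB

docker run -d --name proxy --network edge_net -p 443:443 nginx

# "proxy" and "db" share no network, so even a compromised front end
# has no route to PostgreSQL: the extra layer Hugo describes.
```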
One big thing of course from a privacy standpoint is it's all hosted in France, in the EU, which means the, we have, you have the european data protection laws which aren't as stringent in the US, unfortunately. So that's definitely a benefit. Yeah. Jake, I think you wanted to ask something. [00:41:19] Speaker C: Yeah. [00:41:19] Speaker A: So I've got a kind of a, it's a twofold question, but it's from a similar idea. So how do you handle spikes? Not like, like constant spikes, but say either a post goes viral or someone starts getting, starts getting popular. But like, you know, I do like to say spike in popularity or I've also seen this DDoS attacks. How does this infrastructure handle with those? Because I've been the, even on x, I've had a few posts go viral for like maybe like a day where he's getting like bombarded with stuff and then it dies off. [00:42:07] Speaker C: Yeah. First, like I'm pretty sure that if somebody really, really, really was really good and really, really wanted, they could find like holes in the infrastructure is not like I'm saying that. Yeah, yeah, I got it all sorted out. It's like it's a project of one. Yeah. But up until now, knock on wood, nothing major has happened. The great thing about also hosting the data of Mastodon is like 99% is public. So it's like, it's not like you, even if you could get access to the data, it's not like there is credit card information or health information there. So yeah, it's something that leaves me a bit more at ease. To your question, the way I managed to handle spikes in terms of what happens more on my end is like when there is spikes in traffic, not so much like a particular instance or post or a user going viral. This will be like, this will happen where the instance will slow down because they don't have enough threats to handle the requests and so they can upgrade their plan and stuff like that. But it's like, it's pretty safe in terms of a single instance being really popular at the moment. It doesn't impact my infrastructure much. What happens is like, for instance, during the exodus when Elon Musk bought Twitter. Yeah, and it was great. It's like it was really hard to manage to scale. There was downtime, there was slowdowns, there was tough that I had to, my hosting provider didn't have enough servers that I needed at the moment. I had to wait for servers to be ready to install them. So yeah, it was a bit of a mess. But what I've been doing for the longest time is I only run servers at 50% load. Meaning if a server is reaching more than 50%, I just start to move people across servers. In most cases it's transparent. It's something that you don't even notice. If now I moved your instance to a new app server, you wouldn't know that you've just been moved. So it's the database. The database is the one that is trickier to move because it requires downtime. Depending on the site, it could be like a minute, as it could be like ten minutes or half an hour. But this would be warmed in the advance if that was the case. Yeah, but, so that's it. Like my main trick is to waste money. Basically. I could like run without the servers. Everything would be up today and running fine. So that is basically my trick is like, okay, it's better to have this in case there is some, some load coming in. And in terms of DDoS, it's like my OVH, my data center, they, all their services have DDoS built in protection. 
And it's been okay. There were some issues a couple of times, but never something that brought down the service for a long time or anything like that. It's rare. [00:46:12] Speaker B: And is that scalability one of the advantages of Docker? Because Docker is, for all intents and purposes, a containerization platform, it means that you can just move an instance to another host, and it is transparent in most cases. And I guess that means, as you said, if you're only running servers at 50%, that if something were to happen on one of the app servers, you've got that huge buffer zone. Obviously it does cost you more. I'm guessing OVH is offering VPSes, but there's only so much capacity on a single server, so, as you said, there are going to be times when servers aren't available for you to migrate stuff onto. So it is good to see that load balancing going on. Go on, Jae. [00:47:07] Speaker A: I think, Hugo, you have really hit on a good benefit of going with Masto.host versus doing it yourself, because everybody, in the communal sense, is paying toward the server upkeep. So even if our instance is, for instance, slower one day, we're still helping the overall community, and everybody that's hosting with Masto.host is benefiting from that communal aspect. And I would definitely say I like this whole concept of data portability; it's actually, I think, one of the reasons why I encourage people to self-host. But yeah, would you say this is probably one of the biggest benefits, taking away the complexity and taking away having to manage all this stuff? Because on my last instance, I was in the Discord channel watching all the stuff that they were doing behind the scenes, and my head was spinning just watching all of that. [00:48:18] Speaker C: Yeah, there are a lot of advantages to hosting with a service like mine. And there are others, like Spacebear and services like that, that already provide this. The people behind the services I mentioned have been really friendly with me: I have helped them migrate people from my service to theirs, and they have helped me migrate people from their service to mine. So we are not really that competitive, at least I don't feel it. It's important that there are alternatives. A couple of years ago I made what I call the 25% commitment. It's on the website, in my blog posts: I will not host more than 25% of the Mastodon installations, because I don't want to centralize Mastodon on my end. It's something that was concerning me, because when I started, after a few months, I was hosting, I don't know, 40% of the instances. But the community was so small; there were, like, 500 servers and I had, I don't know, 200 or something like that, just saying from what I recall, I don't know the exact numbers. So this is something that I think is important, and I think it's great that there are alternatives now. To talk about the benefits: there are several benefits.
Having somebody host it for you, on a shared infrastructure: first of all, there's resource optimization, obviously. And then there are things like not having to maintain it, and the ease of upgrades. You can upgrade with minimal downtime, at least on my platform. Let's imagine you have an event or something like that: you can go there and upgrade your plan a couple of steps up, then the event finishes and you downgrade it, and it's fine. So if something is happening, your name popped up on the news, you can upgrade, and when it dies down, you downgrade again, and you didn't waste a lot of money. Those are the main benefits that come to mind. [00:51:25] Speaker B: And I want to call one thing out which I was really impressed with, and we'll link to this in the show notes, folks: the update policy that you have. Well, I wouldn't say policy, but sort of the aim that you have for Mastodon updates. Now, these come out every now and then, and you've said categorically in your help articles that you're not going to install betas or release candidates onto production servers. I assume in the background you're testing all those in a separate testing environment to see what you might need to do. But you've very clearly said that when a new Mastodon version comes out, it will go through a period of testing, and then, usually within a couple of hours, it will be deployed. From your point of view, is that an easy process? Does Docker make it any easier, or is it still very manual? I assume you've got it all automated to the hilt at this point. [00:52:21] Speaker C: My automation, my full automation, is all done in bash scripts. [00:52:25] Speaker B: Oh wow. [00:52:26] Speaker C: You know? It's rudimentary automation, but I love it. It doesn't depend on any libraries, it doesn't depend on anything. I know exactly what each line of that code does. Whenever something goes wrong, I can find the line, just add a comma, and it's done. I love it. So I have a bash script that I run on all the servers, and, yeah, it's not that hard to update Mastodon if you follow the instructions and you know your way around a Linux machine; it's pretty straightforward. I think the installation is the scary part, but even that just requires a lot of time for you to really go over everything, and then frustration: why doesn't this work? Where do I look for stuff? Knowing where to look, that's the hard part. But after you understand it and you've been doing it for a while, it becomes easier and easier. And Mastodon, I have to give credit to Eugen and to Claire, it hasn't failed. There isn't a single stable release that I have installed that broke anything, deleted data, or did something funny and stopped working. It's been rock solid ever since.
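Hugo doesn't share his scripts, so the block below is only a toy sketch of what bash automation for rolling a Mastodon update across app servers might look like. The host names, paths, image tag, and compose layout are all invented for illustration.

```bash
#!/usr/bin/env bash
# Toy rollout script: upgrade every app server to a new Mastodon
# release, one host at a time.
set -euo pipefail

VERSION="v4.2.8"                       # assumed target release
HOSTS=(app1.example.net app2.example.net)

for host in "${HOSTS[@]}"; do
  echo ">>> Updating ${host} to ${VERSION}"
  ssh "$host" "
    set -e
    cd /srv/mastodon &&
    docker pull ghcr.io/mastodon/mastodon:${VERSION} &&
    docker compose run --rm web bundle exec rails db:migrate &&
    docker compose up -d
  "
done
```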
[00:54:27] Speaker A: I've got one final question before we end. Because Mastodon is open source, does Masto.host support things like customizations to the source code, or is that something that would be better suited elsewhere? [00:54:51] Speaker C: No customization, period. In terms of code, there is no customization. You have the custom CSS that you can use inside your admin interface to tweak the design a bit, but in terms of code, everybody shares the same base code. That's the only way I can assure the service: if I allowed you to tweak something, then your instance would stop working and I would have to waste time, like, okay, why isn't this working? Oh, it was the code that you changed. [00:55:31] Speaker B: Yeah, absolutely. [00:55:34] Speaker C: And stuff like that. It would open the door to all sorts of security, privacy and stability issues, and you could use more resources than I planned for you to have access to. This way, I know everybody's on the same source code, and that has a great advantage: you report an issue to me, I fix the issue for everybody, and probably all the other instances that are with me don't even realize an issue ever existed, because everybody's on the same code. Another example: you change something in your code, I install the upgrade, and either I have to stop the upgrade because the code didn't merge well, or I overwrite what you have done, and then: oh, I made a change and now it's no longer there. Yeah, because I had to upgrade the code. So it's not viable. And let me now be a bit evil, because a lot of people ask me about customization and say, oh, I want to tweak this and that. If you want to tweak something on an instance, you shouldn't be doing it in production. Obviously there are small instances where it's just you or a couple of people, but if you have 100 people on your instance, you shouldn't be testing something you aren't sure about in production. You should first test it somewhere, and if you know how to test it somewhere, you know how to install Mastodon. So you can probably run your own Mastodon installation, and that way you can tweak everything you want, you have full control, and you can do things that no managed service will ever allow you. [00:57:52] Speaker B: Got you. That makes a lot of sense. And thank you. One of the things we were wondering was whether we could modify something really simple, and we realized, ah, no, you have to change the character limit on posts at code level. Now, there is a flip side to this: hopefully, in future versions of Mastodon, that sort of thing could become an admin-level tweak. With a lot of software, things can. I remember working for a company where certain settings were never exposed in the customer's UI, not because we didn't want to, but because we simply did not have the time for the developers to add a checkbox. And so that meant me and my support colleagues would be doing the updates manually by SQL, which is just as dangerous. But we did it, we staged it, we made sure it was done properly. [00:58:49] Speaker A: Anyway. So, Hugo, if people would like to start their own instance, the best place to go would be masto.host? [00:58:59] Speaker C: Yeah, that's it. [00:59:01] Speaker A: And I can definitely say we were up and running just like that. Oh, just a quick question, sorry, I'm just curious: what's the process for you once somebody signs up? [00:59:22] Speaker C: Hopefully nothing. I'm asleep and everything goes.
Yeah, so it's all automated. Somebody signs up, they choose the domain, they pay by credit card or PayPal; depending on the country, there may be other payment methods. As soon as they pay, their account is created, they set up a password, and then they get DNS instructions. That is the scary part for most people: going to the configuration of their domain and adding a record to the DNS settings. It sounds terrible, but it's pretty easy. The installation itself is automated; best case scenario, you'll be up and running in under five minutes. Then it depends on the DNS propagation, really, which can take a while depending on multiple factors, and it's not something that I have any control over. There is also an option to use a masto.host subdomain. You lose portability with that, so it's something you should be aware of, but you don't have to have a domain and you don't have the extra expense. And if you choose a masto.host subdomain, it's probably more like two or three minutes and you are up and running. So that's it. There's nothing on my end that I have to do, hopefully, except if something breaks.
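For anyone curious what that "scary" DNS step amounts to in practice, these are standard commands for checking it. The domain is hypothetical, and the exact record Masto.host asks you to create comes from their signup instructions, not from this sketch.

```bash
# Check whether the record for a (hypothetical) custom domain has
# propagated to public resolvers yet:
dig +short social.example.com

# Once the installer has finished, a live Mastodon instance answers on
# its public API, so this makes a quick health check:
curl -s https://social.example.com/api/v1/instance | head -c 300
```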
[01:01:05] Speaker A: So, James, can I have your credit card? Because I'm starting a Masto.host for coffee drinkers. [01:01:12] Speaker B: Do you know what's really, really scary? At the moment, my credit card and my debit cards are actually within closer reach of Jae than of me. [01:01:21] Speaker A: Okay, hold on one second. [01:01:23] Speaker C: Okay. [01:01:23] Speaker A: Creating coffeedrinkers.social. [01:01:26] Speaker B: I like that. [01:01:27] Speaker C: I like that. [01:01:28] Speaker B: So, Hugo, we will absolutely send people your way. Thank you so much for taking the time to speak to us. Honestly, it has been enlightening. Folks, I hope you've learned a little something about what makes Mastodon unique. And do go back and listen to our earlier episode with the wonderful James Smith, who is an admin of mastodon.me.uk, the instance Jae and I are personally on; obviously crossedwires.social is with Hugo in terms of our instance. That episode talks a little bit more about Mastodon itself, so definitely consider this the deeper dive. Jae, thank you for hosting this episode. [01:02:06] Speaker A: And I just want to say to everybody that is listening: think about who's hosting your data and where your data is. And I want to highly encourage you all to look at starting up your own instance, because, again, keep your data out of the control of a centralized location. [01:02:25] Speaker B: That's the whole idea of the fediverse. I would echo what Jae's just said, with a caveat. For me and you personally, Jae, I think it's absolutely fine to be on an instance we trust for our personal things. But if you want to have control of your brand, if you have an organization, you really should... I don't know how you feel about this, Hugo, but for me, an organization should have full control over its Mastodon instance. [01:02:49] Speaker C: Yeah, I completely agree. The thing that most surprises me is that organizations keep on saying, follow me on Facebook, or follow me on X. It makes absolutely no sense. You can have 10,000 followers on Facebook, you make a post, and, like, ten people might see it, because their algorithm decides that ten people will be allowed to see it. And with Mastodon, you can control your own server, control your own domain. If you decide to move from one place to another, you take your followers with you. There is no algorithm, so everybody that is online and looking at that time will see your post. If you know that your followers are online at three in the afternoon, you make a post at three in the afternoon and all the followers that are online will see it. For companies, it's really a no-brainer. And I would like to thank you for inviting me and for allowing me to talk about this, because my friends don't let me talk about this with them. They really don't care; they don't understand what I'm talking about. So it's really fun when somebody wants to talk with me about tech and stuff like that, the back ends of servers, and Mastodon, and all of that. [01:04:21] Speaker B: There we go, Jae, we've got a new tagline for the show: talking therapy for technologists. [01:04:29] Speaker A: And I think with that, everybody, we will press the end-record button, and I'm going to go start, like, five more Mastodon instances. [01:04:45] Speaker B: Thanks for listening to this episode of Crossed Wires. We hope you've enjoyed our discussion and we'd love to hear your thoughts, so please drop us a note at podcast@crossedwires.net. Why not come and join our Discord community at crossedwires.net/discord? We've got lots of text channels, we've even got voice channels, and we've got forum posts for every episode that we put out there. [01:05:14] Speaker A: If you'd like to check out more of our content, and you are on Mastodon, you can follow us over at crossedwires.social. Head on over to crossedwires.net/youtube for all our videos, and keep an eye on our Twitch channel at crossedwires.live for our upcoming streams. [01:05:26] Speaker B: If you like what you heard, please do drop a review in your podcast directory of choice. It really does help spread the word about the show. [01:05:32] Speaker A: And of course, if you can spare even the smallest amount of financial support, we'd be incredibly grateful. You can support us at ko-fi.com/crossedwires. [01:05:46] Speaker B: Until next time, thanks for listening.
