us.forums.blizzard.com

For at least the past four hours: many login issues, slow authentication, and disconnects. Blizzard CS post: https://twitter.com/BlizzardCSEU_EN/status/1672954423973142530 ![](https://lemmy.world/pictrs/image/cbf09bba-15ee-4769-9f6d-8e1fd4117e07.jpeg)

Lemmy instance broken after upgrading to 0.18.0

    > For me, I have to use `url: "http://pictrs:8080/"` instead of 127.0.0.1 in the config.hjson file

    I checked this, and my lemmy.hjson file already has the host for pictrs set to http://pictrs:80.

    The only thing that has worked so far is manually clearing my site's icon image directly in the database.
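
    For anyone else stuck in the same place, this is roughly what I mean by clearing it in the database; a minimal sketch only, assuming the stock Lemmy schema where the `site` row keeps the icon URL in an `icon` column (double-check the table and column names against your own database before running anything):

    ```
    -- Sketch only: clear the instance icon stored on the site row.
    -- Assumes the default Lemmy schema (site.icon holds the pictrs URL).
    BEGIN;
    UPDATE site
    SET    icon = NULL
    WHERE  id = 1;   -- a single-instance install normally has exactly one row
    COMMIT;
    ```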

  • Lemmy instance broken after upgrading to 0.18.0

    Note: this seems to have something to do with the database; something got royally messed up post-upgrade.

    After trying all sorts of network hacks and updates, I eventually just decided to back up my Postgres container and nuke it.

    With a fresh Postgres DB running along with 0.18.0, my self-hosted site is back online. Of course, my local post history and all my subs are gone... but at least my site is operational again.

    I'd advise anyone self-hosting not to upgrade to 0.18.0 yet.

  • Upgraded my instance to 0.18.0, and now there are errors in both lemmy-ui and the lemmy backend. This is a docker setup, so to update, all I did was change the lemmy-ui and lemmy versions in docker-compose.yml. Note: downgrading to 0.17.4 results in an API error instead, and still a broken site, so downgrading does not appear to be an option.

    I see federation messages processing as usual; however, trying to load the UI generates a weird error in lemmy-ui and returns "Server Error" instead of the main page. The error in the lemmy-ui logs looks like it is trying to load the site icon via pictrs from the public-facing domain, but it connects to 127.0.1.1:443 (for pictrs) and gets refused. lemmy-ui and pictrs are on the same default `lemmyinternal` network.

    **lemmy-ui log**

    ```
    FetchError: request to https://SITE_URL_REDACTED/pictrs/image/a29da3fc-b6ce-4e59-82b0-1a9c94f8faed.webp failed, reason: connect ECONNREFUSED 127.0.1.1:443
        at ClientRequest.<anonymous> (/app/node_modules/node-fetch/lib/index.js:1505:11)
        at ClientRequest.emit (node:events:511:28)
        at TLSSocket.socketErrorListener (node:_http_client:495:9)
        at TLSSocket.emit (node:events:511:28)
        at emitErrorNT (node:internal/streams/destroy:151:8)
        at emitErrorCloseNT (node:internal/streams/destroy:116:3)
        at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
      type: 'system',
      errno: 'ECONNREFUSED',
      code: 'ECONNREFUSED'
    }
    ```

    **lemmy log errors**

    ```
    2023-06-23T21:10:03.153597Z  WARN Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: data did not match any variant of untagged enum AnnouncableActivities
       0: lemmy_apub::activities::community::announce::receive
            at crates/apub/src/activities/community/announce.rs:46
       1: lemmy_server::root_span_builder::HTTP request
            with http.method=POST http.scheme="http" http.host=hakbox.social http.target=/inbox otel.kind="server" request_id=35c58bff-dc83-40f7-b7f0-d885072958ab http.status_code=400 otel.status_code="OK"
            at src/root_span_builder.rs:16
    LemmyError { message: None, inner: data did not match any variant of untagged enum AnnouncableActivities, context: SpanTrace [{ target: "lemmy_apub::activities::community::announce", name: "receive", file: "crates/apub/src/activities/community/announce.rs", line: 46 }, { target: "lemmy_server::root_span_builder", name: "HTTP request", fields: "http.method=POST http.scheme=\"http\" http.host=hakbox.social http.target=/inbox otel.kind=\"server\" request_id=35c58bff-dc83-40f7-b7f0-d885072958ab http.status_code=400 otel.status_code=\"OK\"", file: "src/root_span_builder.rs", line: 16 }] }
    ```

    ```
    2023-06-23T21:09:14.740187Z  WARN Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: Other errors which are not explicitly handled
       0: lemmy_server::root_span_builder::HTTP request
            with http.method=POST http.scheme="http" http.host=SITE_URL_REDACTED http.target=/inbox otel.kind="server" request_id=83feb464-5402-4d88-b98a-98bc0a76913d http.status_code=400 otel.status_code="OK"
            at src/root_span_builder.rs:16
    LemmyError { message: None, inner: Other errors which are not explicitly handled

    Caused by:
        Http Signature is expired, checked Date header, checked at Fri, 23 Jun 2023 21:09:14 GMT, expired at Fri, 23 Jun 2023 21:08:14 GMT, context: SpanTrace [{ target: "lemmy_server::root_span_builder", name: "HTTP request", fields: "http.method=POST http.scheme=\"http\" http.host=SITE_URL_REDACTED http.target=/inbox otel.kind=\"server\" request_id=83feb464-5402-4d88-b98a-98bc0a76913d http.status_code=400 otel.status_code=\"OK\"", file: "src/root_span_builder.rs", line: 16 }] }
    ```

    I've also filed a bug, because I've been trying to troubleshoot this but haven't found a solution yet. Any help is appreciated.

    Lemmy's total users exceeds 740k today, up from 540k yesterday

    I wonder if these are real users or if someone wrote a script to register users via the Lemmy API… 🤔

  • Lemmy's total users exceeds 740k today, up from 540k yesterday

    I self-host! Very nice having an instance all to myself.

  • ![](https://lemmy.world/pictrs/image/d04db531-e404-4c1d-bfc7-13af2ad3ee86.jpeg) [Twin Blades of Azzinoth](https://www.wowhead.com/item-set=699/the-twin-blades-of-azzinoth)

    ***

    1. Obtain both original glaives from the original Black Temple on a Warrior, Rogue, Death Knight, Monk, or Demon Hunter
    2. Equip both at the same time to obtain [this achievement](https://www.wowhead.com/achievement=426/warglaives-of-azzinoth)
    3. Kill Illidan in the Timewalking version of Black Temple on any character (in a premade group, the leader talks to Vormu to enter the raid)

    When Illidan dies in step 3, you will get [this achievement](https://www.wowhead.com/achievement=11869/ill-hold-these-for-you-until-you-get-out), unlocking the transmog.

    **These steps have to be done in this order. There are no exceptions.**


    Not seeing much traffic on the [!patriots@lemmy.ml](https://lemmy.ml/c/patriots) community, so I decided to create one here on lemmy.world.

    Workaround for the performance issue with posting in large communities

    Yes. It absolutely does happen on other instances that have thousands of users.

  • Workaround for the performance issue with posting in large communities

    That actually sounds like something I would have enjoyed. I joined Reddit around the time it started taking over, I think.

  • Workaround for the performance issue with posting in large communities

    That’s pretty neat! I’ve honestly never seen it mentioned on Reddit before, so I got a bit excited to see someone mention it here, admittedly maybe too excited.

  • Workaround for the performance issue with posting in large communities

    I really hope someone is doing some level of performance testing on those changes to make sure they actually fix the performance issues.

  • Workaround for the performance issue with posting in large communities

    Have you tried enabling the slow query logs, @ruud@lemmy.world? I went through that exercise yesterday to try to find the root cause, but my instance doesn’t have enough load to reproduce the conditions, and my day job prevents me from devoting much time to writing a load test to simulate it.

    I did see several queries taking longer than 500ms (up to 2000ms), but they did not appear related to saving posts or comments.
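
    For reference, this is roughly how I turned slow query logging on; a sketch assuming a reasonably recent Postgres, where the threshold is in milliseconds and the change can be applied without a restart:

    ```
    -- Log every statement that takes longer than 500 ms.
    ALTER SYSTEM SET log_min_duration_statement = '500ms';
    -- Reload the config so the new setting takes effect without a restart.
    SELECT pg_reload_conf();
    -- Confirm the active value.
    SHOW log_min_duration_statement;
    ```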

  • Workaround for the performance issue with posting in large communities

    Oh, Big-O notation? I never thought I’d see someone else mention it out in the wild!

    :high-five:

  • Lemmy.world starting guide

    I did see it, thanks. I’m hoping to find some time to contribute this week.

  • Welcome to the Critical Role community

    Welcome here, and thanks for creating the CR community.

    Is it Thursday yet?

  • Lemmy.world starting guide

    I suppose I could, but I've honestly spent the majority of today on Lemmy answering "support" questions for people, lol... Maybe I can try to take a look tomorrow. 🤷

    Actually, saving edits on lemmy.ml is also slow, about 4-5 seconds. It’s probably a combination of user load and non-optimized queries.

  • Lemmy.world starting guide

    It would be funny if it were missing an index and doing a full table scan for some odd reason...

    I focus on application performance in my day-to-day work, and missing indexes, greedy unoptimized queries, etc., are the root of a lot of issues. Hopefully you can get to the bottom of it.

    Quick note: I'm not seeing a big delay (10+ seconds) when posting or saving on lemmy.ml or my own instance.
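
    If anyone wants to poke at the missing-index theory, the usual first step is to run the suspect statement through EXPLAIN and look for a sequential scan. A rough sketch, with a completely made-up query and index purely for illustration (the real statement would come out of the slow query log, and column names would need checking against the actual schema):

    ```
    -- Hypothetical example: see how Postgres executes a comment lookup.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT id, content
    FROM   comment
    WHERE  post_id = 12345;

    -- If the plan shows "Seq Scan on comment", an index like this could help
    -- (illustrative only; check the real schema and existing indexes first).
    CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_comment_post_id
        ON comment (post_id);
    ```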

  • Hello! Can we normalize a few things?

    OMG! You are right. It’s my time to shine!

    different states of the same post in different instances?

    This sounds like a bug…

  • Installed Lemmy 0.17.4-rc1

    The footer still shows the old versions, by the way. It does feel snappier than earlier for sure.

    I’m setting up my own instance to mess around with as well. I’ve got it running via Docker. The SSL setup was a pain and not really well documented. Also, pictrs is giving me issues, complaining about DB access.

    The redirect is tricky and works in some browsers, but not others.

    Edit: save took about 10 seconds, which is better than 20-30 seconds!

    Edit 2: the footer shows the updated versions now.

  • Lemmy.world improvements and issues

    Actually, the approval email was in my spam folder. My bad!

    Thanks for looking into the save issue. I’m looking into setting up my own instance now, maybe on a DigitalOcean droplet, so I can debug a few things on my own.


    Slashzero

    slashzero@lemmy.world