Why are so many young people taking hallucinogens?
  • Avatar: "Initials" by Florian Körner, licensed under CC0 1.0, remix of the original, created with dicebear.com (https://github.com/dicebear/dicebear)
    cmysmiaczxotoy

    Shrooms are astoundingly awesome. They cured my depression and I have been much happier since. I ate 11.7 grams of high-potency dried shrooms and had the craziest experience of my life a few months ago. Got stuck in a time loop, lived many lives, became other people, was a duck spirit. It was a journey unlike anything imaginable without them. We knew we were in for a hell of a ride when the buzz was strong less than 5 minutes after eating them. The buzz doubled in intensity every 5 minutes, and 20 minutes in I fell through a checkered white tunnel with floating honey falling above me. I landed on a dark circular lawn, looking up at the divine tower of music, as me and my fellow duck spirit friends stood in a circle worshiping the music. It got so much crazier after that. I did piss myself, but read afterward that that is par for the course at such extreme doses

    13
  • Anyone else noticing a lot more "Access Denied" pages while using hardened Firefox lately?
  • cmysmiaczxotoy

    It's almost always Cloudflare. Fucking cunts won't ever allow my browser onto any site using their "services". Then there is hCaptcha: "Solve my puzzles till the end of time in a fucking loop, and no, you're never getting into the site". I hate them

    7
  • cross-posted from: https://lemm.ee/post/23155648

    Here is the script.

    ```
    #!/usr/bin/env bash
    # Download and search youtube subs
    # deps: yt-dlp, awk, perl, and one or more of ugrep, ripgrep, grep
    # usage: "script youtube_url"

    main() {
        url="$@"
        check_if_url
        get_video_id
        search_for_downloaded_matching_files
        set_download_boolean_flag
        download_subs
        read_and_format_transcript_file
        echo_description_file
        user_search
    }

    # Exit unless the input looks like an https URL
    check_if_url() {
        local regex='^https://[^[:space:]]+$'
        if ! [[ $url =~ $regex ]]; then
            echo "Invalid input. Valid input is a url matching regex ${regex}"
            exit 1
        fi
    }

    get_video_id() {
        video_id=$(echo "$url" | sed -n 's/.*v=\([^&]*\).*/\1/p')
    }

    search_for_downloaded_matching_files() {
        # Find newest created files matching the video_id
        transcript_file="$( /usr/bin/ls -t --time=creation "$PWD"/*${video_id}*\.vtt 2>/dev/null | head -n 1 )"
        description_file="$( /usr/bin/ls -t --time=creation "$PWD"/*${video_id}*\.description 2>/dev/null | head -n 1 )"
    }

    set_download_boolean_flag() {
        if [ -n "$transcript_file" ] && [ -n "$description_file" ]; then
            download=0 # FALSE
        else
            download=1 # TRUE
        fi
    }

    download_subs() {
        if [ "$download" -eq 1 ]; then
            yt-dlp --restrict-filenames --write-auto-sub --skip-download "${url}"
            yt-dlp --restrict-filenames --sub-langs=eng --write-subs --skip-download "${url}"
            yt-dlp --restrict-filenames --write-description --skip-download "${url}"
            # Search files again since they were just downloaded
            search_for_downloaded_matching_files
        fi
    }

    read_and_format_transcript_file() {
        perl_removed_dupes="$(perl -0777 -pe 's/^\d\d.*\n.*\n.*<\/c>//gm' <"${transcript_file}")"
        local prefix="https://www.youtube.com/watch?v=${video_id}&t="
        local suffix="s"
        formated_transcript_file="$(awk -v pre="$prefix" -v suf="$suffix" '
        /^([0-9]{2}:){2}[0-9]{2}\.[0-9]{3}/ {
            split($1, a, /[:.]/);
            $1 = pre (int(a[1]*3600 + a[2]*60 + a[3]) - 3) suf;
            sub(/ --> [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{3}/, "");
            sub(/ align:start position:0%$/, "");
            print;
            next;
        }
        {
            sub(/ align:start position:0%$/, "");
            print;
        }
        ' <<<"${perl_removed_dupes}")"
        # CRLF for ugrep to avoid ?bug? where before lines are not all outputted
        formated_transcript_file_CRLF=$(printf '%b' "$formated_transcript_file" | sed 's/$/\r/')
    }

    echo_description_file() {
        cat "${description_file}"
    }

    user_search() {
        echo -e "\n\n"
        read -rp "Enter regex (read as raw input): " search_term

        : ${app_count:=0}

        if command -v ug >/dev/null 2>&1; then
            echo -e "\n\n\n\n"
            echo "Ugrep output"
            ug --pretty=never -B2 -A1 -i -Z+-~1 -e "${search_term}" --andnot "^https?:\/\/" <<<"$formated_transcript_file_CRLF"
            ((app_count++))
        fi

        if command -v rg >/dev/null 2>&1; then
            echo -e "\n\n\n\n"
            echo "Ripgrep output"
            rg -iP -B2 -A7 "^(?!https?:\/\/).*\K${search_term}" <<<"$formated_transcript_file"
            ((app_count++))
        fi

        if [ "$app_count" -eq 0 ]; then
            echo -e "\n\n\n\n"
            echo "Grep output"
            grep -iP -B2 -A1 "${search_term}" <<<"$formated_transcript_file"
            echo -e "\n\n"
            echo "Consider installing ripgrep and ugrep for better search"
            ((app_count++))
        fi
    }

    main "$@"
    ```
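    The timestamp rewriting at the heart of read_and_format_transcript_file can be checked in isolation. A minimal sketch that feeds one fake VTT cue through the same awk program, using a placeholder video id:

    ```shell
    #!/usr/bin/env bash
    # One VTT cue line plus its caption text; the awk program converts the
    # cue start time to a youtube deep link, backed off by 3 seconds.
    printf '00:01:05.000 --> 00:01:07.000 align:start position:0%%\nhello world\n' |
    awk -v pre="https://www.youtube.com/watch?v=dQw4w9WgXcQ&t=" -v suf="s" '
    /^([0-9]{2}:){2}[0-9]{2}\.[0-9]{3}/ {
        split($1, a, /[:.]/);                               # hh, mm, ss
        $1 = pre (int(a[1]*3600 + a[2]*60 + a[3]) - 3) suf;
        sub(/ --> [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{3}/, "");
        sub(/ align:start position:0%$/, "");
        print; next;
    }
    { sub(/ align:start position:0%$/, ""); print; }
    '
    # → https://www.youtube.com/watch?v=dQw4w9WgXcQ&t=62s
    # → hello world
    ```

    65 seconds minus the 3-second back-off gives t=62s; caption text passes through untouched.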

    8
    0

    The total combat losses of the enemy from 24.02.2022 to 18.12.2023
  • cmysmiaczxotoy
    Category Losses
    Aircraft 324
    Anti-Aircraft Warfare Systems 610 +1
    Armoured Personnel Vehicle 10752 +60
    Artillery Systems 8175 +38
    Cruise Missiles 1610
    Helicopters 324
    MLRS 926 +3
    Personnel ~347160 +1090
    Special Equipment 1198 +4
    Submarines 1
    Tanks 5783 +44
    UAV Operational-Tactical Level 6290 +12
    Vehicles & Fuel Tanks 10822 +56
    Warships/Boats 22
    3
  • I made a script that downloads from youtube super fast using a custom aria2 build.

    Aria2: https://github.com/P3TERX/Aria2-Pro-Core/releases
    ffmpeg build: https://github.com/yt-dlp/FFmpeg-Builds/releases (I chose ffmpeg-master-latest-linux64-gpl.tar.xz)

    ```
    #!/usr/bin/env bash
    #set -x

    if [[ -z $@ ]]; then
        echo "specify download url"
        exit
    fi

    dir_dl="$PWD"
    url="$@"
    ffmpeg_dir="$HOME/.local/bin.notpath/"
    download_archive_dir="$HOME/Videos/yt-dlp/"
    download_archive_filename=".yt-dlp-archived-done.txt"
    mkdir -p "$download_archive_dir"

    youtube_match_regex='^.*(youtube[.]com|youtu[.]be|youtube-nocookie[.]com).*$'

    if [[ "$1" =~ $youtube_match_regex ]]; then
        # Strip playlist/tracking parameters down to the bare video link
        url="$(echo "$@" | perl -pe 's/((?:http:|https:)*?\/\/(?:www\.|)(?:youtube\.com|m\.youtube\.com|youtu\.|youtube-nocookie\.com).*(?:c(?:hannel)?\/|u(?:ser)?\/|v=|v%3D|v\/|(?:a|p)\/(?:a|u)\/\d.*\/|watch\?|vi(?:=|\/)|\/#embed\/|oembed\?|be\/|e\/)([^&?%#\/\n]+)).*/$1/gm')"
        yt-dlp \
            --check-formats \
            --clean-info-json \
            --download-archive "$download_archive_dir$download_archive_filename" \
            --embed-chapters \
            --embed-info-json \
            --embed-metadata \
            --embed-thumbnail \
            --external-downloader aria2c \
            --downloader-args \
            "aria2c: \
            --allow-piece-length-change=true \
            --check-certificate=false \
            --console-log-level=notice \
            --content-disposition-default-utf8=true \
            --continue=true \
            --disk-cache=8192 \
            --download-result=full \
            --enable-mmap \
            --file-allocation=falloc \
            --lowest-speed-limit=100K \
            --max-concurrent-downloads=16 \
            --max-connection-per-server=64 \
            --max-mmap-limit=8192M \
            --max-resume-failure-tries=5 \
            --max-file-not-found=2 \
            --max-tries=3 \
            --min-split-size=64K \
            --no-file-allocation-limit=8192M \
            --piece-length=64k \
            --realtime-chunk-checksum=false \
            --retry-on-400=true \
            --retry-on-403=true \
            --retry-on-406=true \
            --retry-on-unknown=true \
            --retry-wait=1 \
            --split=32 \
            --stream-piece-selector=geom \
            --summary-interval=0 " \
            --ffmpeg-location "$ffmpeg_dir" \
            --output "$dir_dl"'/%(channel)s/%(title)s_%(channel)s_%(upload_date>%Y-%m-%d)s_%(duration>%H-%M-%S)s_%(resolution)s.%(ext)s' \
            --prefer-free-formats \
            --remux-video mkv \
            --restrict-filenames \
            --sponsorblock-remove "filler,interaction,intro,music_offtopic,outro,preview,selfpromo,sponsor" \
            --sub-langs "en.*,live_chat" \
            --write-auto-subs \
            --write-description \
            --write-info-json \
            --write-playlist-metafiles \
            --write-subs \
            --write-thumbnail \
            "$url"
    else
        yt-dlp \
            --download-archive "$download_archive_dir$download_archive_filename" \
            --embed-chapters \
            --ffmpeg-location "$ffmpeg_dir" \
            --http-chunk-size 10M \
            --output "$dir_dl/%(title)s_%(duration>%H-%M-%S)s_%(upload_date>%Y-%m-%d)s_%(resolution)s_URL_(%(id)s).%(ext)s" \
            --prefer-free-formats \
            --restrict-filenames \
            "$url"
    fi
    ```
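    The branch selection between the aria2c-accelerated youtube path and the generic yt-dlp fallback comes down to youtube_match_regex. A minimal sketch of just that check, with placeholder URLs:

    ```shell
    #!/usr/bin/env bash
    # Same regex the script uses to decide which yt-dlp invocation to run
    youtube_match_regex='^.*(youtube[.]com|youtu[.]be|youtube-nocookie[.]com).*$'

    for u in "https://www.youtube.com/watch?v=abc123" "https://example.com/file.iso"; do
        if [[ "$u" =~ $youtube_match_regex ]]; then
            echo "youtube branch: $u"   # gets the aria2c downloader args
        else
            echo "generic branch: $u"   # plain yt-dlp with --http-chunk-size
        fi
    done
    # → youtube branch: https://www.youtube.com/watch?v=abc123
    # → generic branch: https://example.com/file.iso
    ```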

    9
    0
    MIT design would harness 40 percent of the sun’s heat to produce clean hydrogen fuel
  • cmysmiaczxotoy

    Hydrogen is a nightmare to store and requires a lot of energy to compress. If you try to store it in a metal container, it will seep through the metal, turn the metal brittle, and eventually rupture it. Storage is the reason we aren't all using it right now

    3
  • Streaming costs are rising, and there are more platforms than ever to choose between. Some people are going back to piracy
  • cmysmiaczxotoy

    I have always been a sailor, but I did pay for Netflix for 6-7 years, mainly to share the account with others. When the content evaporated, I dropped it. I paid for HBO's streaming service to support Game of Thrones, and then dropped it too. I always watched WEB-DLs even when paying for a service, because of the quality

    6
  • 15.36TB SSD SAMSUNG PM1633A SAS How do I connect it?
  • cmysmiaczxotoy

    Cool, I will give it a try. I may install it on my Linux machine and access it over Samba. I need to test whether game installs will work over the network like this first

    2
  • 15.36TB SSD SAMSUNG PM1633A SAS How do I connect it?
  • cmysmiaczxotoy

    Sure, I would love to. Do you know what software to run on Windows to get good results? I can temporarily attach it to my Linux machine if necessary

    2
  • 15.36TB SSD SAMSUNG PM1633A SAS How do I connect it?

    I bought a 15.36TB SSD: SAMSUNG PM1633A SAS, MZ-ILS15TA, DELL EMC MZ1LS15THMLS-000D4. I am trying to figure out what to buy in order to connect it to my desktop PC via PCIe. Is this a viable or recommended solution?

    • SFF-8643 to SFF-8639 cable
    • Dell LSI 9311-8i 8-port Internal 12G SAS PCIe x8 Host Bus RAID Adapter 3YDX4

    9
    11