id: int64 (values 5 - 1.93M)
title: string (length 0 - 128)
description: string (length 0 - 25.5k)
collection_id: int64 (values 0 - 28.1k)
published_timestamp: timestamp[s]
canonical_url: string (length 14 - 581)
tag_list: string (length 0 - 120)
body_markdown: string (length 0 - 716k)
user_username: string (length 2 - 30)
1,910,960
Written Formatting Is A Technology
American Standard Academic casing is how a college paper should be formatted. Anything academic is a...
0
2024-07-04T01:46:58
https://dev.to/theholyspirit/written-formatting-is-a-technology-36pb
tool, technology, language, human
American Standard Academic casing is how a college paper should be formatted. Anything academic is a college paper. GNRG Script Indicates Good Energy Is Imprinted By The Author A good number of papers I have written over the past research period have been in GNRG Script. The research activity concluded with the understanding that it is simple to get into the habit of writing in GNRG Script. [Bonus B-Box Content Available At Alexason.Com](https://alexason.com/written-formatting-is-a-tool/)
theholyspirit
1,910,959
OCR with Tesseract on Windows
Tesseract is open source and one of the most accurate OCR engines available. I will simply post...
0
2024-07-04T01:41:22
https://dev.to/jmrsuarez/ocr-con-tesseract-49m3
Tesseract is open source and one of the most accurate OCR engines available. I will simply post small snippets of code here to make it easier to pick the thread back up. A basic task is to scan many images and turn them into a PDF or text file; this is how I managed it: 1. Install Tesseract from the Windows binary: https://github.com/UB-Mannheim/tesseract/wiki 2. Add the path to the Tesseract executable to PATH. 3. Put all the images in a subdirectory. 4. Using classic CMD, get a listing of all the images to run OCR on; I used this command: dir /b >imagenes.txt The resulting file looks like this inside, for example: 01.jpeg 02.jpeg 03.jpeg 04.jpeg 05.jpeg ... 5. In PowerShell, the input is the file holding the name of each image to be OCRed, and the output is a single text file that collects all the text produced by the OCR process. Here is the example; I use the following line, which runs in PowerShell: tesseract C:\Users\usuario\Downloads\ocr\imagenes.txt salida --psm 3 This command creates the file "salida.txt", which appends the plain-text output of all the images listed in "imagenes.txt". **Sources:** **The Tesseract documentation:** https://tesseract-ocr.github.io/tessdoc/Command-Line-Usage.html https://github.com/tesseract-ocr/tesseract/releases/ **For building the list of image file names:** https://superuser.com/questions/395836/how-to-copy-a-list-of-file-names-to-text-file https://superuser.com/questions/870905/windows-command-line-dir-command-to-display-only-the-file-name-in-8-3-format https://github.com/tesseract-ocr/tessdoc https://github.com/UB-Mannheim/tesseract/wiki/Install-additional-language-and-script-models This is where I learned that Tesseract exists: https://www.reddit.com/r/DataHoarder/ **A more or less decent online OCR:** Almost good, but it gives me blank pages; maybe that only happens in the free version and the paid version returns everything, but I have never tried the paid version: https://www.onlineocr.net/pdf-to-word
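For anyone who prefers to drive Tesseract from Python instead of PowerShell, here is a minimal sketch using the pytesseract wrapper; the binary path, file names, and glob pattern are illustrative and assume `pip install pytesseract pillow` plus the same UB-Mannheim Tesseract install.

```python
# Illustrative Python alternative to the PowerShell approach above.
# Assumes: pip install pytesseract pillow, and the UB-Mannheim Tesseract build installed.
import glob

import pytesseract
from PIL import Image

# Point pytesseract at the Tesseract binary if it is not on PATH (path is illustrative).
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"

# OCR every .jpeg in the current folder and append the text to a single output file.
with open("salida.txt", "w", encoding="utf-8") as out:
    for path in sorted(glob.glob("*.jpeg")):
        text = pytesseract.image_to_string(Image.open(path), config="--psm 3")
        out.write(text + "\n")
```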
jmrsuarez
1,910,957
Definitive guide to Higher Order Functions (HOF)
Higher Order Functions in Javascript Introduction In this article, we will...
0
2024-07-04T01:31:17
https://dev.to/codewithjohnson/definitive-guide-to-higher-order-function-hoc-28b5
webdev, javascript, beginners, programming
## Higher Order Functions in Javascript ## Introduction In this article, we will learn about higher order functions in JavaScript. We will learn what higher order functions are, how to create them, and how to use them. We will also learn about some common higher order functions in JavaScript. ## What are Higher Order Functions? A higher order function is a function that takes one or more functions as arguments or returns a function as its result. In other words, a higher order function is a function that operates on other functions. Higher order functions are a powerful feature of JavaScript that allows us to write more concise and expressive code. You are probably already familiar with higher order functions in JavaScript. For example, the `map`, `filter`, and `reduce` functions are all higher order functions. These functions take a function as an argument and apply that function to each element of an array. ## Creating Higher Order Functions Creating a higher order function in JavaScript is easy. You can define a function that takes another function as an argument and then call that function inside the body of the higher order function. Here is an example of a higher order function that takes a function as an argument: ```javascript function higherOrderFunction(callback) { console.log('Inside higher order function'); callback(); } function callback() { console.log('Inside callback function'); // do some other work } ``` In this example, the `higherOrderFunction` function takes a `callback` function as an argument and then calls that function inside the body of the higher order function. ## Using Higher Order Functions You can use higher order functions in JavaScript to write more concise and expressive code. For example, you can use the `map` function to transform an array of values into a new array of values. Here is an example: ```javascript const numbers = [1, 2, 3, 4, 5]; const doubledNumbers = numbers.map(function(number) { return number * 2; }); console.log(doubledNumbers); // [2, 4, 6, 8, 10] ``` In this example, we use the `map` function to double each number in the `numbers` array. The `map` function takes a function as an argument and applies that function to each element of the array. ## Common Higher Order Functions There are many common higher order functions in JavaScript that you can use to write more concise and expressive code. Some of the most common higher order functions are: ### `map` The `map` function transforms an array of values into a new array of values. It takes a function as an argument and applies that function to each element of the array. ```javascript const numbers = [1, 2, 3, 4, 5]; const doubledNumbers = numbers.map(function(number) { return number * 2; }); console.log(doubledNumbers); // [2, 4, 6, 8, 10] ``` ### `filter` The `filter` function filters an array of values based on a condition. It takes a function as an argument and applies that function to each element of the array. If the function returns `true`, the element is included in the new array; otherwise, it is excluded. ```javascript const numbers = [1, 2, 3, 4, 5]; const evenNumbers = numbers.filter(function(number) { return number % 2 === 0; }); console.log(evenNumbers); // [2, 4] ``` ### `reduce` The `reduce` function reduces an array of values to a single value. It takes a function as an argument and applies that function to each element of the array, accumulating a single result. 
```javascript const numbers = [1, 2, 3, 4, 5]; const sum = numbers.reduce(function(accumulator, number) { return accumulator + number; }, 0); console.log(sum); // 15 ``` ## Custom Higher Order Functions You can also create your own custom higher order functions in JavaScript. For example, you can create a function that composes two functions together: ```javascript function compose(f, g) { return function(x) { return f(g(x)); }; } function addOne(x) { return x + 1; } function multiplyByTwo(x) { return x * 2; } const addOneAndMultiplyByTwo = compose(multiplyByTwo, addOne); console.log(addOneAndMultiplyByTwo(3)); // 8 ``` In this example, we define a `compose` function that takes two functions `f` and `g` as arguments and returns a new function that composes them together. We then define two functions `addOne` and `multiplyByTwo` and compose them together to create a new function `addOneAndMultiplyByTwo`. ## Conclusion In this article, we learned about higher order functions in JavaScript. We learned what higher order functions are, how to create them, and how to use them. We also learned about some common higher order functions in JavaScript. Higher order functions are a powerful feature of JavaScript that allows us to write more concise and expressive code. By using higher order functions, you can make your code more modular, reusable, and easier to understand.
codewithjohnson
1,910,956
HOW TO HIRE A HACKER TO RECOVER LOST FUNDS (CYBERPUNK PROGRAMMERS)
If you've ever felt the gut-wrenching pain of falling victim to an online scam, you know the feeling...
0
2024-07-04T01:31:10
https://dev.to/alma_calliope_6c483cc01a7/how-to-hire-a-hacker-to-recover-lost-fundscyberpunk-programmers-51pn
lostcrypto, hireahackeronline, anonymoushelp
If you've ever felt the gut-wrenching pain of falling victim to an online scam, you know the feeling of helplessness that follows. I was in that very situation not too long ago. Let me share my story with you. It all started with a seemingly promising investment opportunity on two websites: bit-run and coinvest. With the lure of quick profits, I decided to take the plunge and invested $1500. To my amazement, my investment seemingly skyrocketed to $50,000 within just five days. Excited and eager to cash out my earnings, I initiated a withdrawal request, only to be met with a frustrating demand – I was asked to pay 20% of my earnings, a whopping $14,000, before I could access my funds. Feeling hesitant but still hopeful, I complied with their request, believing that it was just a small price to pay for the substantial profits awaiting me. However, my optimism quickly turned to dismay when, the very next day, I was informed that an additional $5000 payment was required for my withdrawal to be processed. It was then that I realized I had fallen victim to a scam. Feeling devastated and betrayed, I thought I had lost my hard-earned money for good. That was until I stumbled upon a beacon of hope amidst the darkness – Cyberpunk Programmers. Intrigued by the numerous positive reviews attesting to their ability to recover stolen bitcoins and funds, I decided to give them a chance. And let me tell you, it was a decision that changed everything. From the moment I reached out to Cyberpunk Programmers, I was met with empathy and a sense of urgency to right the wrongs that had been done to me. Their team of experts guided me through the entire recovery process, providing regular updates and reassurance every step of the way. Despite my initial skepticism, they instilled in me a newfound sense of hope. Cyberpunk Programmers' client success rate is 100%. Unlike other recovery services that promise the world but deliver little, Cyberpunk Programmers delivers tangible results. Within just a week, they were able to recover all of my stolen bitcoins, restoring my financial stability. But their impact goes beyond just recovering lost funds – it's about restoring dignity, trust, and peace of mind to individuals who have been wronged by unscrupulous scammers. With Cyberpunk Programmers by your side, you're not just another case number – you're a valued client deserving of justice and restitution. If you've been a victim of online fraud or a scam, don't despair. Reach out to Cyberpunk Programmers and let them be the guiding light in your journey towards reclaiming what's rightfully yours. Their expertise, integrity, and dedication are unmatched, and I can say with absolute certainty that they have earned my trust and eternal gratitude. Thank you, Cyberpunk Programmers, for giving me a second chance and restoring my faith in the goodness of cryptocurrencies. Send them a message via email: cyberpunk @ programmers . net for a complaint like this.
alma_calliope_6c483cc01a7
1,910,941
Introduction to Data Engineering with Apache Airflow
Introduction Data engineering plays a crucial role in the success of any data-driven...
0
2024-07-04T00:33:06
https://dev.to/kartikmehta8/introduction-to-data-engineering-with-apache-airflow-3m41
javascript, beginners, programming, tutorial
## Introduction Data engineering plays a crucial role in the success of any data-driven organization. It involves the process of designing, building, and managing data pipelines to efficiently and reliably move data from various sources to storage and processing systems. One of the most popular tools for data engineering is Apache Airflow. It is an open-source workflow management platform that allows users to easily create, schedule, and monitor data pipelines. ## Advantages of Apache Airflow 1. **Scalability:** Apache Airflow is highly scalable, making it suitable for both small and large data pipelines. It can easily handle thousands of tasks and processes, making it an ideal tool for data engineering in organizations of any size. 2. **Easy to use:** Apache Airflow has a user-friendly interface that allows users to easily create and schedule data pipelines. It also provides a visual representation of workflows, making it easier to monitor and troubleshoot any issues. 3. **Extensible:** Apache Airflow has a modular architecture that allows for easy integration with other tools and systems. This makes it highly customizable and adaptable to different data engineering needs. ## Disadvantages of Apache Airflow 1. **Steep learning curve:** While Apache Airflow is relatively easy to use, it does have a learning curve for beginners. Users with no prior knowledge of data engineering may find it challenging to understand and utilize all its features effectively. 2. **Limited debugging tools:** Apache Airflow lacks advanced debugging tools, making it difficult to identify and fix errors or issues in data pipelines. ## Features of Apache Airflow 1. **DAGs (Directed Acyclic Graphs):** Apache Airflow uses DAGs to define workflows and dependencies between tasks. This allows for more flexibility and control over data pipelines. ```python from airflow import DAG from airflow.operators.bash_operator import BashOperator from datetime import datetime, timedelta default_args = { 'owner': 'airflow', 'depends_on_past': False, 'start_date': datetime(2021, 1, 1), 'email_on_failure': False, 'email_on_retry': False, 'retries': 1, 'retry_delay': timedelta(minutes=5), } dag = DAG('tutorial', default_args=default_args, schedule_interval=timedelta(days=1)) t1 = BashOperator( task_id='print_date', bash_command='date', dag=dag, ) t2 = BashOperator( task_id='sleep', bash_command='sleep 5', retries=3, dag=dag, ) t1 >> t2 ``` 2. **Job scheduling:** Apache Airflow allows for the easy scheduling of tasks, either through a command-line interface or a web-based dashboard. ## Conclusion Apache Airflow is an essential tool for data engineering that offers many advantages such as scalability, ease of use, and extensibility. However, it also has some limitations, including a steep learning curve and limited debugging tools. Overall, Apache Airflow is a powerful and customizable platform that can significantly enhance the efficiency and reliability of data pipelines in any organization.
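The DAG example above uses the BashOperator; as a hedged illustration of the scheduling feature described in the article, the sketch below shows a PythonOperator task on a cron-style `schedule_interval`, using the same Airflow 1.x-style import paths as the example. The DAG id, callable, and cron expression are made up for illustration.

```python
# Sketch of a scheduled DAG with a PythonOperator, following the same
# Airflow 1.x-style imports as the BashOperator example above.
# The DAG id, callable, and cron expression are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator


def extract():
    # Placeholder for real extraction logic
    print("extracting data")


dag = DAG(
    'scheduled_extract',
    start_date=datetime(2021, 1, 1),
    schedule_interval='0 6 * * *',  # run every day at 06:00 (cron syntax)
    catchup=False,
)

extract_task = PythonOperator(
    task_id='extract',
    python_callable=extract,
    dag=dag,
)
```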
kartikmehta8
1,910,953
Indulge Your Senses with Playstar Slot Games: Stunning Graphics and Abundant Bonuses Await Newly Registered Players
Indulge Your Senses with Playstar Slot Games: Stunning Graphics and Abundant Bonuses...
0
2024-07-04T01:26:28
https://dev.to/donquixote_doflamingo_f61/manjakan-indra-anda-dengan-game-slot-playstar-grafik-menakjubkan-dan-bonus-berlimpah-menanti-untuk-para-pemain-yang-baru-daftar-1b0e
Indulge Your Senses with Playstar Slot Games: Stunning Graphics and Abundant Bonuses Await Newly Registered Players ================================================================================================================================ ![PlayStar Review & Online Casinos List](https://www.worldcasinodirectory.com/images/content/playstar_casino_1669164011203.jpg) Playstar Slot Games, known as the most trusted and best 200-bet slot game in Indonesia, has earned a strong reputation among online slot enthusiasts. Offering a wide range of game variants with captivating themes and attractive features, Playstar Slot Games gives players a thrilling and immersive playing experience. As a mobile application, Playstar lets users easily access various treasure chests to unlock popular prizes, adding an element of excitement and surprise to the gameplay. Whether players are looking for adventure, fantasy, or classic slot themes, Playstar Slot Games caters to a range of preferences, making it a popular choice for online slot fans. One of the unique features of Playstar Slot Games lies in its gameplay, which combines attractive visuals, smooth navigation, and a user-friendly interface. Players can explore a variety of slot themes and game mechanics, each designed to offer a different experience and challenge. In addition, Playstar offers high Return to Player (RTP) rates, such as 98.8% on some games, significantly increasing players' chances of winning. This commitment to fair and rewarding gameplay sets Playstar apart from other online slot platforms, attracting players who value transparency and fairness in their gaming experience. Beyond its captivating gameplay and high RTP rates, Playstar Slot Games also introduces players to a world of bonuses and rewards that enhance the overall gaming experience. From deposit bonuses to cashback offers and tournaments, Playstar makes sure players are continuously rewarded for their loyalty and engagement. By offering generous bonuses and promotions, Playstar from [dewapoker](https://31.14.238.37/) not only attracts new players but also gives existing players an incentive to keep exploring the wide range of slot games available on the platform. Through a combination of stunning graphics, engaging gameplay, and abundant bonuses, Playstar Slot Games delivers a comprehensive and enjoyable online gaming experience for players of all levels. The Visual Appeal of Playstar Slot Games ----------------------------------------- ![RATUSLOT: Daftar Slot DEWIRATU212 Pasti Gacor Hari Ini](https://inicdnku.xyz/assets/images/daftar.gif) Playstar slot games are renowned for their stunning graphics and captivating animations, creating a visually engaging gaming experience for players. The intricate details, vibrant colors, and smooth animations of Playstar slots raise the gameplay to a new level of excitement and entertainment. These visually appealing elements not only improve the overall aesthetics of the games but also transport players into worlds of fantasy and adventure, making the gaming experience truly engaging and enjoyable. [poker88](https://94.131.100.115/) players are drawn into a dazzling visual world that heightens the thrill of every spin and win. 
- Stunning graphics and captivating animations enhance the gaming experience. - Vibrant colors and intricate details create a visually appealing environment. - Smooth animations add an element of excitement and entertainment to the gameplay. One of the main attractions of Playstar slot games is their immersive themes and designs, which take players to diverse and enchanting worlds. Whether players seek the thrill of exploring ancient civilizations, underwater mysteries, or the excitement of space adventures, Playstar slots offer a variety of themes to suit every preference. The attention to detail in the games' thematic design ensures that players are fully absorbed in the storyline and setting, fostering a sense of escapism and wonder while playing. - A variety of themes caters to different player preferences. - Attention to detail deepens the immersive experience for players. - Thematic design fosters a sense of escapism and wonder during play. In addition to stunning graphics and immersive themes, Playstar slot games also feature visual effects and soundtracks that elevate the overall gaming experience. Dynamic visual effects, including animations, transitions, and special effects, add depth and excitement to every spin of the reels. Furthermore, carefully crafted soundtracks and audio effects complement the game themes, creating a multisensory experience that captivates [dominobet](https://185.170.214.206/) players and keeps them engaged for hours. - Dynamic visual effects and animations add depth and excitement to the gameplay. - Carefully crafted soundtracks complement the game themes for a multisensory experience. - Sound effects and audio elements increase player engagement and immersion in the game world. ### Abundant Bonuses and Rewards for Newly Registered Players Players who have just registered with Playstar Slot Games are greeted with a generous set of welcome bonuses and free spins, setting the stage for an exciting gaming experience. These initial bonuses serve as a warm welcome to the platform and give players the chance to explore the various slot games with better odds of winning. In addition, ongoing promotions such as deposit bonuses, cashback rewards, and free spins add an extra layer of excitement and value for players, encouraging them to keep enjoying the wide selection of games available. The availability of such bonuses not only enhances the gaming experience but also increases players' overall satisfaction and engagement with the platform. To further reward player loyalty and enhance their gaming journey, Playstar Slot Games offers a robust loyalty program and VIP rewards. These programs are designed to recognize and reward players who frequent the platform, giving them exclusive benefits, personalized rewards, and special privileges. By participating in these loyalty programs, players can gain additional benefits such as: - Higher bonus percentages - Faster withdrawal processing - Dedicated customer support - Exclusive access to VIP events and tournaments The [domino88](https://67.205.148.8/) loyalty program not only provides an incentive to keep playing but also fosters a sense of community and belonging among players, thereby enhancing the overall gaming experience. 
Beyond the welcome bonuses and loyalty programs, Playstar Slot Games keeps the excitement alive with a variety of ongoing promotions and special offers. These promotions can include unique opportunities to earn additional rewards, participate in tournaments, or enjoy limited-time bonuses. By regularly refreshing and diversifying its promotions, the platform ensures that players always have something new and interesting to look forward to, keeping the gaming experience fresh and engaging. Players can take advantage of these ongoing promotions to maximize their winnings, explore new games, and immerse themselves in a world of thrilling gameplay and abundant rewards.
donquixote_doflamingo_f61
1,910,950
avif2jpg.net - Free Online AVIF to JPG Converter for Universal Compatibility
In the world of next-gen image formats, compatibility can sometimes be a challenge. That's where...
0
2024-07-04T01:21:18
https://dev.to/hivegeekdev/avif2jpgnet-free-online-avif-to-jpg-converter-for-universal-compatibility-2g99
In the world of next-gen image formats, compatibility can sometimes be a challenge. That's where avif2jpg.net comes in – your solution for converting cutting-edge AVIF files back to the universally supported JPG format! 🔄 **Why Convert AVIF to JPG?** While AVIF offers excellent compression and quality, there are times when you need the universal compatibility of JPG: 1. **Wider Device Support**: Ensure your images are viewable on older devices and software. 2. **Social Media Compatibility**: Some platforms don't yet support AVIF uploads. 3. **Editing Flexibility**: Open and edit your images in any image editing software. 4. **Printing**: JPG remains the preferred format for many printing services. 5. **Legacy System Integration**: Seamlessly integrate images into systems that don't support AVIF. 🚀 **Why Use avif2jpg.net?** 1. **Free and Easy to Use**: No registration required. Just upload, convert, and download. 2. **Instant Conversion**: Transform your AVIF files to JPG in seconds. 3. **Batch Processing**: Convert multiple images at once to save time. 4. **No Installation Needed**: Works right in your browser - no software to download. 🔧 **How to Use:** 1. Visit https://avif2jpg.net 2. Upload your AVIF image(s) 3. Adjust quality settings if desired 4. Click 'Convert' 5. Download your JPG file It's that simple! Ensure your images are accessible to everyone, regardless of their device or software. 👨‍💻 **Perfect For:** - Designers working with clients using older software - Social media managers ensuring wide platform compatibility - Photographers preparing images for print - Web developers creating fallback images - Anyone needing to share images widely 🌟 **Why Choose avif2jpg.net?** - Completely free - No account required - User-friendly interface - Fast and efficient - Secure: Your images are processed in your browser, not stored on our servers 💡 **Pro Tip**: While AVIF offers great benefits, it's always good to have a JPG version for universal compatibility. Use avif2jpg.net to create JPG versions of your AVIF files as needed! Ready to ensure your images are accessible everywhere? Try avif2jpg.net now and convert your AVIF files to universally compatible JPGs! https://avif2jpg.net?utm_source=devto
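As a rough local alternative to the web tool, the sketch below converts a single AVIF file to JPG with Pillow; it assumes the `pillow-avif-plugin` package is installed (importing `pillow_avif` registers the AVIF decoder with Pillow), and the file names are placeholders.

```python
# Rough local sketch: convert one AVIF image to JPG with Pillow.
# Assumes: pip install pillow pillow-avif-plugin; file names are placeholders.
from PIL import Image
import pillow_avif  # noqa: F401  side-effect import registers the AVIF decoder

img = Image.open("photo.avif").convert("RGB")  # JPG does not support alpha
img.save("photo.jpg", quality=90)
```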
hivegeekdev
1,910,948
Developers Listen: If You Don't Have a Rich Domain Model, You Don't Leverage OOP
In the world of software development, Object-Oriented Programming (OOP) and SOLID principles are...
0
2024-07-04T01:10:00
https://dev.to/muhammad_salem/developers-listen-if-you-dont-have-a-rich-domain-model-you-dont-leverage-oop-aid
In the world of software development, Object-Oriented Programming (OOP) and SOLID principles are often touted as best practices for creating maintainable, extensible, and robust systems. However, a crucial aspect that's frequently overlooked is the context in which these principles truly shine: a rich domain model. Let's delve into why a rich domain model is essential for leveraging the full power of OOP and SOLID principles, and what we miss out on when we settle for an anemic domain model. ### The Rich Domain Model: A Fertile Ground for OOP and SOLID A rich domain model is characterized by entities that encapsulate both data and behavior. These entities are not mere data containers but active participants in the business logic of the application. This approach aligns perfectly with the core tenets of OOP and provides the ideal environment for applying SOLID principles. #### Polymorphism in Action In a rich domain model, different types of entities can implement shared interfaces or extend common base classes while providing their own specific behaviors. For instance, consider a parking lot system: ```csharp public abstract class Vehicle { public abstract decimal CalculateParkingFee(int hours); } public class Car : Vehicle { public override decimal CalculateParkingFee(int hours) { return hours * 2.5m; // Car parking fee logic } } public class Motorcycle : Vehicle { public override decimal CalculateParkingFee(int hours) { return hours * 1.5m; // Motorcycle parking fee logic } } public class Bus : Vehicle { public override decimal CalculateParkingFee(int hours) { return hours * 5m; // Bus parking fee logic } } ``` Here, polymorphism allows different vehicle types to provide their own fee calculation logic, promoting flexibility and reducing repetitive code. #### Inheritance for Code Reuse Common behaviors can be abstracted into base classes, promoting code reuse. For example, in a parking spot system: ```csharp public abstract class ParkingSpot { public string SpotId { get; set; } public bool IsOccupied { get; set; } public abstract void ParkVehicle(Vehicle vehicle); public abstract void RemoveVehicle(); } public class CompactSpot : ParkingSpot { public override void ParkVehicle(Vehicle vehicle) { // Parking logic for compact spot IsOccupied = true; } public override void RemoveVehicle() { // Logic to remove vehicle from compact spot IsOccupied = false; } } public class LargeSpot : ParkingSpot { public override void ParkVehicle(Vehicle vehicle) { // Parking logic for large spot IsOccupied = true; } public override void RemoveVehicle() { // Logic to remove vehicle from large spot IsOccupied = false; } } ``` This design allows for shared functionality in the base class, with specific behaviors defined in subclasses. #### Liskov Substitution Principle (LSP) in Practice With a rich domain model, we can design our class hierarchies to adhere to LSP. Any subclass of `Vehicle` should be usable wherever a `Vehicle` is expected, without breaking the system's behavior. This principle ensures that our object hierarchies are well-designed and behave consistently. 
```csharp public class ParkingLot { private List<Vehicle> vehicles = new List<Vehicle>(); public void AddVehicle(Vehicle vehicle) { vehicles.Add(vehicle); } public void CalculateFees() { foreach (var vehicle in vehicles) { Console.WriteLine($"Parking fee: {vehicle.CalculateParkingFee(3)}"); } } } ``` In this example, any subclass of `Vehicle` can be added to the `ParkingLot`, and their respective `CalculateParkingFee` methods will be called correctly. #### Open/Closed Principle (OCP) for Extensibility A rich domain model allows us to extend functionality without modifying existing code. For example, adding a new vehicle type like `ElectricCar` can be done by creating a new subclass of `Vehicle`, without changing the core parking logic. ```csharp public class ElectricCar : Vehicle { public override decimal CalculateParkingFee(int hours) { return hours * 3m; // Electric car parking fee logic } } ``` The system is now extended to accommodate `ElectricCar` without modifying existing vehicle types or parking logic. #### Composition for Complex Behaviors Rich domain models often use composition to build complex entities from simpler ones. For instance, a `ParkingLot` entity might be composed of multiple `Level` objects, each containing multiple `ParkingSpot` objects, allowing for a modular and flexible design. ```csharp public class Level { public int LevelNumber { get; set; } public List<ParkingSpot> Spots { get; set; } = new List<ParkingSpot>(); } public class ParkingLot { public List<Level> Levels { get; set; } = new List<Level>(); } ``` This composition allows us to manage parking lots with varying levels and spots effectively. ### The Anemic Domain Model: A Missed Opportunity In contrast, an anemic domain model consists of entities that are little more than data structures, with behavior implemented in separate service classes. While this approach can work, it misses out on many of the benefits that OOP and SOLID principles offer. Entities are essentially data holders with getters and setters, and the drawbacks follow from that: inheritance and polymorphism become less meaningful because the domain logic resides elsewhere. **Limited OOP Use**: In an anemic model with data-centric entities, there's less opportunity for inheritance and polymorphism. The focus is on data manipulation, not complex behavior. **SOLID Principles Not Violated (but not leveraged either)**: Since anemic entities have minimal logic, it's difficult to violate principles like Liskov Substitution (there's not much behavior to substitute). However, these principles also don't provide much benefit in this context. #### Limited Polymorphism With behavior separated from data, there's less opportunity to leverage polymorphism. Instead of different vehicle types implementing their own fee calculation methods, we might end up with a single service class with a large switch statement to handle different types. ```csharp public class ParkingFeeService { public decimal CalculateFee(Vehicle vehicle, int hours) { switch (vehicle) { case Car _: return hours * 2.5m; case Motorcycle _: return hours * 1.5m; case Bus _: return hours * 5m; default: throw new ArgumentException("Unknown vehicle type"); } } } ``` This approach is less flexible and harder to maintain. #### Reduced Encapsulation Anemic models often expose their internal state through getters and setters, violating the principle of encapsulation. This can lead to scattered business logic and increased coupling between components. 
```csharp public class Vehicle { public string LicensePlate { get; set; } public int HoursParked { get; set; } } ``` Business logic is then handled externally, increasing complexity. #### Less Natural OCP Application Without rich behavior in entities, extending functionality often requires modifying existing service classes, violating the Open/Closed Principle. #### Underutilized Composition Anemic models tend to rely more on procedural code in services rather than leveraging the power of object composition to model complex domain relationships and behaviors. ### Conclusion: Embracing the Rich Domain Model While anemic domain models can be sufficient for simple CRUD applications, they fall short when dealing with complex business logic. By embracing rich domain models, developers can unlock the full potential of OOP and SOLID principles: - **Entities become more than just data carriers; they encapsulate behavior and truly represent domain concepts.** - **Polymorphism and inheritance can be leveraged to create flexible and reusable code structures.** - **The SOLID principles find natural applications, leading to more maintainable and extensible systems.** - **Complex domain logic can be expressed more clearly and intuitively through object interactions.** A rich domain model not only aligns better with OOP philosophy but also provides a solid foundation for building complex, maintainable software systems that can evolve with changing business needs. Remember, the goal of OOP is not just to group data and functions together, but to model the problem domain effectively. By giving your entities the behavior they deserve, you're not just writing code; you're crafting a software representation of your business domain that's powerful, flexible, and true to life.
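The article's examples are in C#; purely as an illustration of the same rich-domain-model idea in another language, here is a small Python sketch (the class and method names mirror the article's but are not taken from it) where each vehicle type owns its fee logic and the parking lot relies on polymorphic dispatch instead of a service-layer switch.

```python
# Illustrative Python sketch of the same rich-domain-model idea:
# each vehicle type owns its fee logic, and the parking lot relies on
# polymorphic dispatch instead of a service-layer switch statement.
from abc import ABC, abstractmethod


class Vehicle(ABC):
    @abstractmethod
    def calculate_parking_fee(self, hours: int) -> float:
        ...


class Car(Vehicle):
    def calculate_parking_fee(self, hours: int) -> float:
        return hours * 2.5


class Motorcycle(Vehicle):
    def calculate_parking_fee(self, hours: int) -> float:
        return hours * 1.5


class ParkingLot:
    def __init__(self) -> None:
        self.vehicles: list[Vehicle] = []

    def add_vehicle(self, vehicle: Vehicle) -> None:
        self.vehicles.append(vehicle)

    def total_fees(self, hours: int) -> float:
        # No type checks needed: each entity knows how to price itself.
        return sum(v.calculate_parking_fee(hours) for v in self.vehicles)


lot = ParkingLot()
lot.add_vehicle(Car())
lot.add_vehicle(Motorcycle())
print(lot.total_fees(3))  # 12.0
```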
muhammad_salem
1,910,947
Why is login not working after form submission?
login.ts : `async onSubmit(): Promise { this.email = true; this.pwd = true; ...
0
2024-07-04T01:02:54
https://dev.to/samir_soupape/why-login-not-working-after-form-submission--4lli
angular, html, typescript
login.ts : `async onSubmit(): Promise<void> { this.email = true; this.pwd = true; this.mailform = true; this.isLoginInProgress.set(true); this._authService .login(this.form.controls.email.value.trim(), this.form.controls.password.value.trim()) .pipe( catchError((error: HttpErrorResponse) => { this.handleAuthError(error); return of(error); }), tap(response => this._handleLogin(response)), finalize(() => this.isLoginInProgress.set(false)) ) .subscribe({ error: (error) => { console.error('Login error:', error); } }); } private _handleLogin(response: any): void { if (!response?.user) return; this.login = true; const accessToken = response.user.accessToken; const user_uid = response.user.uid; localStorage.setItem('accessToken', accessToken); localStorage.setItem('user_uid', user_uid); window.location.reload(); this._router.navigateByUrl('/dashboard'); } handleAuthError(err: HttpErrorResponse): void { if (!err.error.code) return; this.authError.set(true); this.form.valueChanges .pipe( take(1), tap(() => this.authError.set(true)) ) .subscribe(); } ` login.html : `<form [formGroup]="form" (ngSubmit)="onSubmit()" class="form-detail"> </form>` authservice.ts : `login(email: string, password: string): Observable<any> { console.log('Logging in user:', email); const promise = signInWithEmailAndPassword(this.firebaseAuth, email, password) .then(response => { console.log('Login successful:', response); return response; }) .catch(error => { console.error('Login error:', error); throw error; }); return from(promise); } `
samir_soupape
1,910,946
VS Code settings that every front-end developer should know
In today's world of web development, Visual Studio Code (VS Code) is considered one of the most...
0
2024-07-04T00:59:22
https://dev.to/mhmd-salah/vs-code-settings-that-every-front-end-developer-should-know-2c1h
vscode, frontend, feather, advanced
In today's world of web development, Visual Studio Code (VS Code) is considered one of the most powerful and popular integrated development environments (IDEs) among developers. It is known for its great flexibility and support for a wide range of extensions that make the programming process an enjoyable and efficient experience. However, to get the most out of this powerful tool, developers need to customize its settings to suit their specific needs. Whether you are a novice developer or an expert in front-end development, properly adjusting your VS Code settings can greatly improve your productivity and make the development process much smoother. In this article, we will review a set of optimal settings that will help you improve your programming experience, by using tools such as ESLint, Prettier, Live Server and others. Get ready to discover how you can turn VS Code into an IDE that meets all your needs as a front-end developer. ### Steps to open the settings file in Visual Studio Code: 1 - Make sure you have Visual Studio Code installed on your machine. If you don't have it, you can download it from the official website. 2 - Open the Command Palette: Press Ctrl+Shift+P (or Cmd+Shift+P on macOS) to open the command panel. 3 - Find JSON settings: In the command panel, start typing "Preferences: Open Settings (JSON)" until the option appears in the drop-down menu. Click this option to open the settings JSON file. 4 - Edit settings: The settings.json file will open; you can now copy and paste the settings you want to add or modify. Make sure to save the file after making your edits using Ctrl+S (or Cmd+S on macOS). > You can copy these settings ```json { "editor.tabSize": 2, "editor.insertSpaces": true, "editor.formatOnSave": true, "editor.codeActionsOnSave": { "source.fixAll.eslint": true }, "files.autoSave": "onFocusChange", "files.associations": { "*.js": "javascriptreact", "*.jsx": "javascriptreact" }, "eslint.alwaysShowStatus": true, "prettier.singleQuote": true, "prettier.semi": false, "prettier.trailingComma": "es5", "liveServer.settings.port": 5500, "liveServer.settings.root": "/", "liveServer.settings.ignoreFiles": [ ".vscode/**", "**/*.scss", "**/*.sass" ], "emmet.includeLanguages": { "javascript": "javascriptreact", "html": "html", "css": "css" }, "editor.quickSuggestions": { "strings": true }, "editor.suggestSelection": "first", "explorer.confirmDelete": false, "explorer.confirmDragAndDrop": false, "workbench.colorTheme": "Default Light+", "workbench.iconTheme": "vscode-icons" } ``` ## Explanation of settings: **editor.tabSize:** Set the tab size to 2 spaces. **editor.insertSpaces:** Use spaces instead of tabs. **editor.formatOnSave:** Automatically format code on save. **editor.codeActionsOnSave:** Automatically run ESLint fixes on save. **files.autoSave:** Auto save when focus changes. **files.associations:** Set .js and .jsx files as javascriptreact. **eslint.alwaysShowStatus:** Always show ESLint status. **prettier.*:** Prettier settings for code formatting. **liveServer.settings.*:** Live Server settings to run the project locally. **emmet.includeLanguages:** Include Emmet to support multiple languages. **editor.quickSuggestions:** Enable quick text suggestions. **editor.suggestSelection:** Selects the first suggestion by default. **explorer.confirmDelete:** Disable confirmation of deletion. **explorer.confirmDragAndDrop:** Disable drag-and-drop confirmation. **workbench.colorTheme:** Set the default color theme. **workbench.iconTheme:** Set the theme for the icons.
mhmd-salah
1,910,939
Creating Linux Users with a Bash Script
As a SysOps engineer, efficiently managing user accounts and their permissions is a critical task. In...
0
2024-07-04T00:21:21
https://dev.to/sottie/creating-linux-users-with-a-bash-script-1a7p
As a SysOps engineer, efficiently managing user accounts and their permissions is a critical task. In this article, I will walk you through a bash script that automates the creation of users and their associated groups on a Linux system. This script reads from a text file, generates random passwords, and logs all actions for easy tracking. **Script Overview** The script, create_users.sh, is designed to read a text file containing usernames and group names, create the users and their respective groups, set up home directories, and generate passwords securely. **Detailed Explanation** **Input File and Initial Setup** The script starts by checking if an input file is provided and sets up the necessary directories and files for logging and storing passwords securely. ``` if [ $# -ne 1 ]; then echo "Usage: $0 <name-of-text-file>" exit 1 fi input_file=$1 log_file="/var/log/user_management.log" password_file="/var/secure/user_passwords.csv" mkdir -p /var/secure chmod 700 /var/secure > $log_file > $password_file chmod 600 $password_file ``` **Generating Random Passwords** A function to generate a random password is defined. This function creates a 12-character password using a mix of alphanumeric characters. ``` generate_password() { < /dev/urandom tr -dc 'A-Za-z0-9' | head -c 12 } ``` **Processing the Input File** The script reads each line of the input file, extracting the username and groups. It removes any existing whitespace and checks if the user already exists. ``` while IFS=';' read -r username groups; do username=$(echo "$username" | xargs) groups=$(echo "$groups" | xargs) if id "$username" &>/dev/null; then echo "User $username already exists. Skipping..." | tee -a $log_file continue fi ``` **Creating Users and Groups** If the user does not exist, the script creates the user along with a personal group, generates a password, and sets the password for the user. It also logs these actions. ``` useradd -m -s /bin/bash -G "$username" "$username" echo "Created user $username with group $username" | tee -a $log_file password=$(generate_password) echo "$username:$password" | chpasswd echo "$username,$password" >> $password_file echo "Set password for user $username" | tee -a $log_file ``` **Adding Users to Additional Groups** If additional groups are specified, the script creates those groups if they do not already exist and adds the user to these groups. ``` if [ -n "$groups" ]; then IFS=',' read -ra ADDR <<< "$groups" for group in "${ADDR[@]}"; do if ! getent group "$group" &>/dev/null; then groupadd "$group" echo "Created group $group" | tee -a $log_file fi usermod -aG "$group" "$username" echo "Added user $username to group $group" | tee -a $log_file done fi done < "$input_file" ``` **Conclusion** This script simplifies the task of managing user accounts in a Linux environment, ensuring that all necessary actions are logged and passwords are stored securely. By automating user creation and group assignments, SysOps engineers can save time and reduce the potential for errors. Learn more about the HNG Internship program and how it empowers interns with practical tasks [Here](https://hng.tech/internship) and [Here](https://hng.tech/hire).
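As a side note, the same password-generation step can be expressed with Python's standard library; the snippet below is only an illustrative equivalent of the script's `generate_password` function, not part of the original bash script.

```python
# Illustrative Python equivalent of the script's generate_password function,
# using the standard-library secrets module instead of /dev/urandom and tr.
import secrets
import string


def generate_password(length: int = 12) -> str:
    alphabet = string.ascii_letters + string.digits  # A-Za-z0-9, as in the bash version
    return ''.join(secrets.choice(alphabet) for _ in range(length))


print(generate_password())  # e.g. 'aK93fQ2xLm0Z'
```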
sottie
1,910,938
Adding parameters to my Serverless Web Scraper API
Third installment of the series where I build a serverless API to scrape the historic dates from the USCIS monthly visa bulletin pages.
27,940
2024-07-04T00:16:55
https://community.aws/content/2ikzo80ALbohAb7Uw9k9krqcnRJ/
amazonqdeveloper, terraform, lambda, dynamodb
--- title: "Adding parameters to my Serverless Web Scraper API" description: Third installment of the series where I build a serverless API to scrape the historic dates from the USCIS monthly visa bulletin pages. tags: amazonqdeveloper, terraform, lambda, dynamodb canonical_url: "https://community.aws/content/2ikzo80ALbohAb7Uw9k9krqcnRJ/" series: Building a Serverless Web Scraper with Amazon Q --- ## Introduction In the previous articles, I built a serverless API that scrapes the data from the USCIS monthly visa bulletins and stores it in an Amazon DynamoDB table. After jumping in and [defining the whole infrastructure with Terraform](https://community.aws/content/2hsUp8ZV7UoQpVCApnaTOBljCai), [I realised that I should probably first ensure the code is working](https://community.aws/content/2ihOgUm8x9k15XZ6aBfvUTlwmNN). Now it is time to add some parameters to our function so that it will return the data from categories other than the hard-coded EB-3 one, with some help from [Amazon Q Developer](https://aws.amazon.com/developer/generative-ai/amazon-q/?trk=01b23196-71a4-403b-909c-65ddc571caa0&sc_channel=el) :) ## Updating the Scrape and Storage Logic Before we get to that though, we need to update the logic around the lookups and storing the data. Currently, it will loop over all the URLs on the [main page](https://travel.state.gov/content/travel/en/legal/visa-law0/visa-bulletin.html), extract the ones that contain `/visa-bulletin-for-`, and then do a lookup in the DynamoDB table `ProcessedURLS` to see if that URL was processed. Only the data from pages that are not in that database table will be scraped, added to a single collection, then that whole collection will be stored. We need to update the code so that it stores the data as it scrapes each URL, before writing the processed-URL entry - currently, if an error occurs, that page will never be reprocessed. ## Fixing Up Processing and Storing Our Data To get started, [let's see what we need to change](#prompt-1). Ok, only a small change is needed to `lambda_handler` for the loop where we process and store the data; the updated version is now: ```python # Scrape data from each visa bulletin page for link in visa_bulletin_links: if '2022' in link or '2023' in link or '2024' in link: # Check if the URL has been processed response = processed_urls_table.get_item(Key={'url': link}) if 'Item' in response: print(f"Skipping URL: {link} (already processed)") continue # Process the URL print(f"Processing URL: {link}") url = f"https://travel.state.gov{link}" url_data = scrape_visa_bulletin(url) data.extend(url_data) # Store the data store_data(url_data) # Store the processed URL in DynamoDB processed_urls_table.put_item(Item={'url': link}) ``` Surprisingly little effort, so not much of a distraction. ## Defining Some Enums Looking at the existing parameters (with current defaults), I suspect an `enum` would work, but I don't know how this is done in Python: ```python def read_data_locally(data, filing_type = 'Final Date', category = '3rd', country = 'All Chargeability Areas Except Those Listed'): ``` The [response looks to be what I need (minus the incorrect country, plus the missing ones)](#prompt-2), so let's add some enums! 
```python from enum import Enum class FilingType(Enum): FINAL_DATE = 'Final Date' DATES_FOR_FILING = 'Dates for Filing' class Category(Enum): FIRST = '1st' SECOND = '2nd' THIRD = '3rd' FOURTH = '4th' OTHER_WORKERS = 'Other Workers' class Country(Enum): ALL_AREAS = 'All Chargeability Areas Except Those Listed' CHINA = 'CHINA-mainland born' INDIA = 'INDIA' MEXICO = "MEXICO" PHILIPPINES = 'PHILIPPINES' ``` **Side-note:** Interesting how [Amazon Q](https://aws.amazon.com/q/?trk=07910bf4-f673-4ded-8aa5-ce30fb5e3fdc&sc_channel=el) autocompleted `PHILIPPINES` for me, I'm assuming it picked it up from the context of the linked URLs - I did a reboot of my laptop this morning and started a new chat, so don't think it would be previous chat context, but could be mistaken: ![Amazon Q Developer auto-completing "PHILIPPINES"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/34h370hkn443dkdpq4la.png) Running the app now doesn't return any data, so I suspect that [I need to reference the enums differently](#prompt-3), and yup, I do. Just append `.value`. ## Adding Parameters Now we need to change the code so we can pass these values in. Currently, I'm running the code locally via `python3 local_test.py` which calls the `handler.py` code via: ```python mock_context = MockContext() mock_event = { "key1": "value1", "key2": "value2", # Add any other relevant data for your event } result = lambda_handler(mock_event, mock_context) ``` Hrm, this could be interesting, how would I pass the enum as a request? I could use integers, and do some kind of mapping between the number and the enum value, but honestly, I'm not sure how this works in Python. [Time to find out though!](#prompt-4) The first part of how I add them to the `event` seems reasonable: ```python mock_event = { 'queryStringParameters': { 'filing_type': 'FINAL_DATE', 'category': 'THIRD', 'country': 'ALL_AREAS' } } ``` I'm not quite sure that the way of extracting them looks right, adding in the 2nd parameter looks like hardcoding it to me: ```python query_params = event.get('queryStringParameters', {}) filing_type_str = query_params.get('filing_type', 'FINAL_DATE') country_str = query_params.get('country', 'ALL_AREAS') category_str = query_params.get('category', 'THIRD') ``` [After clarifying how the `get()` method works](#prompt-5), this does appear to be the correct way - that 2nd parameter is a default if it doesn't find the value in the input. Having it as a string bugs me - if I ever change the name of the enum, this will not work. After a bit of back and forth ([1](#prompt-6), [2](#prompt-7), [3](#prompt-8)) I had the following: ```python query_params = event.get('queryStringParameters', {}) filing_type_str = query_params.get('filing_type', FilingType.FINAL_DATE.name) country_str = query_params.get('country', Country.ALL_AREAS) category_str = query_params.get('category', Category.THIRD.name) # Convert string values to enum members filing_type = FilingType[filing_type_str.upper()] country = Country[country_str.upper()] category = Category[category_str.upper()] ... # Updated call to read_data() with the parameters eb3_data = read_data(filing_type, category, country) ``` Additionally, I removed the hardcoded defaults on `read_data()` and `read_data_locally()`. Running the code locally returns the data as expected, I also checked with `'category': 'SECOND',` just to make sure. 
## Splitting the Scrape and Retrieve Methods We are now at a point where the code does what we want, except that every time we retrieve data, it would also loop through all the bulletins and try to process them. Ideally this should be split into 2 separate functions, with a way to call the code to scrape any new bulletins on a schedule. Before I try to find the answer, I'm thinking I could create a 2nd function similar to `lambda_handler(event, context)`, and then set up a 2nd Lambda function using the same Lambda layer for the `requirements`. It does mean I will include some libraries not needed by each function as it will contain **all** the ones across both functions, but I'm fine with that approach. Splitting this into 2 different projects, or even just splitting the `requirements.txt` file, feels even more over the top than where we already are. ![Meme with 2 buttons showing a choice between 2 functions, same file and 2 functions, separate files, with a stressed, sweating man looking at the buttons subtitled by Defining moment in my software career](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g9u5m56wjoz3qs338iq8.jpg) The [suggested approach](#prompt-9) is to split the project into 2 files, `scraper.py` and `retriever.py`, and then to have a `lambda_handler` function in each. I'm tempted to just create a 2nd function inside the 1 file, but let's go with the suggestion and split it into those 2 files. After I rename `handler.py` to `scraper.py` and create a copy named `retriever.py`, I realise we will need to split it further. At the top of the code, we set the tables via `table = dynamodb.Table('VisaBulletinData')` and `processed_urls_table = dynamodb.Table('ProcessedURLs')`, and we also have the enums defined as classes, and both of the functions need them. I [look at how I would do that](#prompt-10), and merrily follow the suggestion. After I remove the code not needed in each of the new functions, each in their own file, extract the enums into `enums.py`, and add the import statement, I stare at the import line in `scraper.py` for a few seconds: ![Python import statements with the ones from the enums file in a darker color](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b9lvpihpheqpyfbuvnzj.png) Why are the 3 classes I import the darker colour of an unused import? Aaaaah! While I use the enums to retrieve the data, I never use them to store the data. I suddenly realise that I should probably change the code to use them as a mapping from the text in the bulletins. I noticed in some of the older ones that the 5th category was split less granularly, so we would need to address that as well. ## Taking a Step Back I've been building this app over the last 2 - 3 weeks, spending an hour here and there between other work. Initially I intended to sit down and build it out in a day or 2, and expected it to not take any longer. I started out with the intention that I wanted to use a Lambda function, store the data in DynamoDB, and then be able to query it. Putting on my Captain Obvious cape, I realise that this is a lot of additional complexity for a problem I could have solved with a spreadsheet where I paste in a new row or two once a month. As the saying goes: > If all you have is a hammer, then all your problems look like a nail. Ok, new plan. I'm going to continue down this path for only a short while longer till I have the Lambda functions deployed, and I can call them. 
## Updating the Infrastructure Since we now have 2 separate functions, we also need to deploy them. This requires changing the existing Lambda since it referenced `handler.py`. Pasting all the Terraform resources again would take up quite a bit of space, but you can look at the [current version](https://github.com/build-on-aws/uscis-priority-date-tracker/blob/article-2/app.tf#L30-L150) at this point in time. We can keep the following resources as-is: 1. `resource "null_resource" "pip_install"` - we aren't splitting `requirements.txt` per source file, so a single one stays the same. 2. `data "archive_file" "layer"` - used to trigger updating the Lambda layer we create for our dependencies. 3. `resource "aws_lambda_layer_version" "layer"` - creates the Lambda layer. 4. `data "archive_file" "app"` combined with the line `source_code_hash = data.archive_file.app.output_base64sha256` to ensure we update the functions for any code changes. This will trigger for both functions even if we only update one of them, but I don't feel it is worth the effort for this project. I do want to ensure we provide the least amount of IAM permissions per function, so will duplicate the existing IAM role, policy, and attachments, and then reduce the scope of each IAM policy for each function with only the access needed. The last step will be to define a 2nd Lambda function after updating the existing one to use the renamed source file. Sorry, I lied, the last-last step will also be to update the DynamoDB table names to use the environment variables defined in the Lambda function, with a fallback if they are not set. ### Adding the Environment Variables for DynamoDB Table Names This change requires updating the Terraform resource for our Lambda function, below is the updated `scraper.py` function: ```hcl # Define the Lambda function resource "aws_lambda_function" "visa_bulletin_scraper" { function_name = "visa-bulletin-scraper" handler = "scraper.lambda_handler" # <--- I've updated the handler's filename as well runtime = "python3.12" filename = data.archive_file.app.output_path source_code_hash = data.archive_file.app.output_base64sha256 role = aws_iam_role.lambda_role.arn layers = [aws_lambda_layer_version.layer.arn] environment { variables = { BULLETIN_DATA = aws_dynamodb_table.visa_bulletin_data.name, PROCESSED_BULLETIN_URLS = aws_dynamodb_table.processed_urls.name } } } ``` We also need to change the currently hard-coded table names at the top of `scraper.py` to the following to [use the environment variables, with a fallback value](#prompt-11): ```python import os # <--- new import added ... # Initialize DynamoDB client dynamodb = boto3.resource('dynamodb') table_name = os.environ.get('BULLETIN_DATA', 'VisaBulletinData') table = dynamodb.Table(table_name) processed_urls_table_name = os.environ.get('PROCESSED_BULLETIN_URLS', 'ProcessedURLs') processed_urls_table = dynamodb.Table(processed_urls_table_name) ``` For `retriever.py`, we only need to add the `VisaBulletinData` via `BULLETIN_DATA`. ### Create a 2nd Lambda Function As mentioned above, we will reuse the same Lambda layer, and only create a 2nd function for `retriever.py`, along with its own IAM policy and role. While doing this, I notice that past-Cobus was lazy with the resource names: ```hcl resource "aws_iam_role" "lambda_role" { name = "visa-bulletin-scraper-role" ... 
} # Attach the necessary IAM policies to the role resource "aws_iam_policy_attachment" "lambda_basic_execution" { name = "lambda_basic_execution" policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" roles = [aws_iam_role.lambda_role.name] } resource "aws_iam_policy_attachment" "dynamodb_access" { name = "dynamodb_access" policy_arn = aws_iam_policy.dynamodb_access_policy.arn roles = [aws_iam_role.lambda_role.name] } # Define the IAM policy for DynamoDB access resource "aws_iam_policy" "dynamodb_access_policy" { ... } ``` We now need to duplicate the Lambda function resource, along with reduced permissions for the IAM role needed by `retriever.py`. It is also a good time to split out our Terraform resources into smaller files to make it easier to find each one of them. I decided to split them from the single `app.tf` into the following: 1. `dynamodb.tf` - defines the 2 tables we use. 2. `lambda_layer.tf` - builds and creates the Lambda layer used by both functions. 3. `lambda_zip.tf` - zips up all the application code into a single `.zip` used by both functions. 4. `lambda_scraper.tf` - defines the Lambda function, IAM policy, and IAM role for `scraper.py`. 5. `lambda_retriever.tf` - defines the Lambda function, IAM policy, and IAM role for `retriever.py`. After making these changes, and running `terraform apply`, the following error is returned: ```text β”‚ Error: creating IAM Role (visa-bulletin-scraper-role): operation error IAM: CreateRole, https response error StatusCode: 409, RequestID: 94248aee-5615-4fbf-936e-81ceb9c24f0f, EntityAlreadyExists: Role with name visa-bulletin-scraper-role already exists. β”‚ β”‚ with aws_iam_role.scraper_role, β”‚ on lambda_scraper.tf line 20, in resource "aws_iam_role" "scraper_role": β”‚ 20: resource "aws_iam_role" "scraper_role" { β”‚ β•΅ β•· β”‚ Error: creating IAM Policy (visa-bulletin-scraper-dynamodb-access): operation error IAM: CreatePolicy, https response error StatusCode: 409, RequestID: da27e143-5a7e-4008-81a8-49bf23fb99d8, EntityAlreadyExists: A policy called visa-bulletin-scraper-dynamodb-access already exists. Duplicate names are not allowed. β”‚ β”‚ with aws_iam_policy.scraper_dynamodb_access_policy, β”‚ on lambda_scraper.tf line 53, in resource "aws_iam_policy" "scraper_dynamodb_access_policy": β”‚ 53: resource "aws_iam_policy" "scraper_dynamodb_access_policy" { β”‚ β•΅ ``` Since I renamed the `aws_iam_role` resource from `lambda_role` to `scraper_role`, we ran into a race condition where the delete for the old one didn't complete before the new one's create started. [IAM role names need to be unique](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html), and this is why we encountered this issue. Running `terraform apply` a second time will fix this - worth keeping in mind if you ever run into this after doing a cleanup. ```text Apply complete! Resources: 4 added, 1 changed, 0 destroyed. ``` **Pro-tip:** `terraform fmt` will format your `.tf` files in the current directory and fix the indentation, be kind and ~rewind~ clean up your source files before you commit them (unlike past-Cobus who also forgot that step). ### But Does It Work > _If you deploy a Lambda function, but you never call it, does it even exist?_ In theory, we now have our 2 Lambda functions that would scrape or return the data. 
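Before calling them, a quick sanity check from the terminal that both functions actually made it into the account - this is just a throwaway CLI query, not part of the project, and the name filter assumes the `visa-bulletin-` prefix we used above:

```bash
# List the deployed functions whose names start with "visa-bulletin"
aws lambda list-functions \
  --query "Functions[?starts_with(FunctionName, 'visa-bulletin')].[FunctionName, Handler, LastModified]" \
  --output table \
  --no-cli-pager
```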
We haven't called them yet though, so [let's see how we can call them from our terminal](#prompt-12): ```bash aws lambda invoke \ --function-name visa-bulletin-retriever \ --cli-binary-format raw-in-base64-out \ --payload '{"queryStringParameters": {"filing_type": "FINAL_DATE", "category": "THIRD", "country": "ALL_AREAS"}}' \ response.json ``` Which returns the lovely error: ```json { "StatusCode": 200, "FunctionError": "Unhandled", "ExecutedVersion": "$LATEST" } ``` ### No It Does Not Using the AWS Console to test the `retriever` function with the same payload as above, I see the issue: !["AWS Console showing the error when running the Lambda function being [ERROR] NameError: name 'Enum' is not defined"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/45t01twjvj5e0h82dzg9.png) When splitting out the enums into their own source file, I forgot to also add the `import` statement. After fixing that, I run `terraform apply` again, and I'm quite curious to see what the change looks like since we're using Lambda layers. The following change to the IAM role makes me suspicious: ```text # aws_iam_policy_attachment.retriever_lambda_basic_execution will be updated in-place ~ resource "aws_iam_policy_attachment" "retriever_lambda_basic_execution" { id = "retriever_lambda_basic_execution" name = "retriever_lambda_basic_execution" ~ roles = [ - "visa-bulletin-scraper-role", # (1 unchanged element hidden) ] # (3 unchanged attributes hidden) } # aws_iam_policy_attachment.scraper_lambda_basic_execution will be updated in-place ~ resource "aws_iam_policy_attachment" "scraper_lambda_basic_execution" { id = "scraper_lambda_basic_execution" name = "scraper_lambda_basic_execution" ~ roles = [ - "visa-bulletin-retriever-role", # (1 unchanged element hidden) ] # (3 unchanged attributes hidden) } ``` This rings a bell from years ago, but I can't quite remember, so off I go and do a search for `aws_iam_policy_attachment` so I can look at the [documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy_attachment). Right at the top of the page, it has a big, red warning: !["Terraform documentation for iam_policy_attachment with a warning that this resource creates exclusive attachments of IAM policies](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hq8wdha9xuszqk9fkicu.png) Aha! _🎡 It's all coming back to me nooooow! 🎡_ This was similar to setting the security group rules inside the [`aws_security_group`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) - in the case of `aws_iam_policy_attachment`, it will play whack-a-mole since we need to attach that policy to 2 different IAM roles. On the first run, one of them will end up succeeding, but for future runs, it will try to attach it again since whichever one finished last would remove the one that finished first. 
To fix this, we need to use `aws_iam_role_policy_attachment` instead:

```hcl
resource "aws_iam_role_policy_attachment" "retriever_lambda_basic_execution" {
  role       = aws_iam_role.retriever_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
```

With the updates done, I ran `terraform apply` again, only for it to time out after 2 minutes with:

```text
β•·
β”‚ Error: reading IAM Role Policy Attachment (visa-bulletin-retriever-role:arn:aws:iam::631077426493:policy/visa-bulletin-retriever-dynamodb-access): empty result
β”‚
β”‚   with aws_iam_role_policy_attachment.retriever_dynamodb_access,
β”‚   on lambda_retriever.tf line 45, in resource "aws_iam_role_policy_attachment" "retriever_dynamodb_access":
β”‚   45: resource "aws_iam_role_policy_attachment" "retriever_dynamodb_access" {
β”‚
β•΅
β•·
β”‚ Error: reading IAM Role Policy Attachment (visa-bulletin-scraper-role:arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole): empty result
β”‚
β”‚   with aws_iam_role_policy_attachment.scraper_lambda_basic_execution,
β”‚   on lambda_scraper.tf line 40, in resource "aws_iam_role_policy_attachment" "scraper_lambda_basic_execution":
β”‚   40: resource "aws_iam_role_policy_attachment" "scraper_lambda_basic_execution" {
β”‚
β•΅
β•·
β”‚ Error: reading IAM Role Policy Attachment (visa-bulletin-scraper-role:arn:aws:iam::631077426493:policy/visa-bulletin-scraper-dynamodb-access): empty result
β”‚
β”‚   with aws_iam_role_policy_attachment.scraper_dynamodb_access,
β”‚   on lambda_scraper.tf line 45, in resource "aws_iam_role_policy_attachment" "scraper_dynamodb_access":
β”‚
```

The hint here is the `empty result` - when you try to attach a policy a 2nd time, it will not return an error if it is already attached. So we have another race condition (I think): the request to remove the old attachment (since we replaced the `aws_iam_policy_attachment` with `aws_iam_role_policy_attachment`) was made in parallel with creating the new one, in an order where the new one didn't return a successful response. At least this is what I would speculate without digging into it too much. Regardless, you can get around this issue by just running `terraform apply` a 2nd time.

### Let's Try Again

Second time is the charm, right? Running our `aws lambda invoke` command again returns:

```text
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
(END)
```

And then it will wait for you to exit the output view - this is the standard pager behaviour, and you can exit it by pressing `q` (or `ctrl + c`). Alternatively, you can add the `--no-cli-pager` parameter:

```bash
aws lambda invoke \
  --function-name visa-bulletin-retriever \
  --cli-binary-format raw-in-base64-out \
  --payload '{"queryStringParameters": {"filing_type": "FINAL_DATE", "category": "THIRD", "country": "ALL_AREAS"}}' \
  --no-cli-pager \
  response.json
```

Looking at the contents in `response.json`, we can see the data!

```json
{
  "statusCode": 200,
  "body": [
    {
      "date": "2021-12-01",
      "filing_type": "Final Date",
      "bulletin_date": "2024-07-01",
      "category": "3rd",
      "sk": "BULLETIN_DATE#2024-07-01",
      "pk": "FILING_TYPE#Final Date#CATEGORY#3rd#COUNTRY#All Chargeability Areas Except Those Listed",
      "country": "All Chargeability Areas Except Those Listed"
    },
    ...
```

While the data is all there, it isn't in a very readable format - we are only really interested in the `bulletin_date` along with the `date` as a sorted list, since all the rest of the data is what we sent in to filter on.
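As a stopgap, if you happen to have `jq` installed, you can pull just those two fields out of `response.json` without touching the code yet:

```bash
# Show only the bulletin_date / date pairs from the raw response
jq '[.body[] | {bulletin_date, date}]' response.json
```

That is good enough for eyeballing the data, but the proper fix is to return a friendlier shape from the function itself.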
### Making the Response Better

A better response would be one where it shows the `filing_type`, `category`, and `country` once, and then has a list of key-value pairs for `bulletin_date` and `date`. [It looks like we can do this by changing the return statement](#prompt-13) to the following:

```python
response = {
    'filing_type': filing_type,
    'category': category,
    'country': country,
    'data': data
}

return {
    'statusCode': 200,
    'body': response
}
```

This does assume that the response from `read_data` is in the format that we need, which is that key-value pair list. To accomplish this, we also need to update the `return` of `read_data` to the following:

```python
    ...
    date_list = []
    for item in sorted_items:
        date = item['date']
        bulletin_date = item['bulletin_date']
        date_list.append({"bulletin_date": bulletin_date, "date": date})

    return date_list
```

And another `terraform apply` later, it is deployed. You can also see from the Terraform output below that it only replaced the Lambda function code, not the Lambda layer, as we didn't add any additional dependencies - `json` is built into Python:

```text
Terraform will perform the following actions:

  # aws_lambda_function.visa_bulletin_retriever will be updated in-place
  ~ resource "aws_lambda_function" "visa_bulletin_retriever" {
        id               = "visa-bulletin-retriever"
      ~ last_modified    = "2024-07-03T22:28:49.000+0000" -> (known after apply)
      ~ source_code_hash = "BmwRVIejGX4B1ual5JdhfulogEDuyTxyW9/3/g95WBE=" -> "eTHf0nHRxTy/Pi4u+tOvpKATsPWqagYcD3rEl0wwwcQ="
        tags             = {}
        # (27 unchanged attributes hidden)

        # (4 unchanged blocks hidden)
    }

  # aws_lambda_function.visa_bulletin_scraper will be updated in-place
  ~ resource "aws_lambda_function" "visa_bulletin_scraper" {
        id               = "visa-bulletin-scraper"
      ~ last_modified    = "2024-07-03T22:29:02.000+0000" -> (known after apply)
      ~ source_code_hash = "BmwRVIejGX4B1ual5JdhfulogEDuyTxyW9/3/g95WBE=" -> "eTHf0nHRxTy/Pi4u+tOvpKATsPWqagYcD3rEl0wwwcQ="
        tags             = {}
        # (27 unchanged attributes hidden)

        # (4 unchanged blocks hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.
```

The `body` of the response will be JSON-encoded, so if we used this function as an API, we would need to deserialize the `body` into an object to access the data. The contents of `response.json` look as follows:

![Terminal showing a json response object as defined further up in this article](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1v8a433yhmx8bifpdk3i.png)

## Wrapping Up

As with most things, this took quite a bit longer than expected. The intention was just to add some parameters to the Lambda function for us to call, but we had to take a few detours along the way. As mentioned further up, I think I'm over-complicating this just a _little_ bit, but I'm still enjoying myself, so I'll do one more article tomorrow to set up a simple CI/CD pipeline that will deploy any future changes to this little app. Oh, and I need to set up a daily schedule to call the `scraper` function to auto-update if a new file is found. Actually, what would be really nice is to also include a notification when it finds a new bulletin - today is 3 July 2024, and I'm already refreshing the page to see when the draft of the August bulletin will be published... (PS: By "tomorrow", I mean Friday 5 July as tomorrow is a public holiday here in the US.)

## Final Code

As for the previous piece, I've tagged this with `article-3`, and pushed the tag.
> [Source code for this article.](https://github.com/build-on-aws/uscis-priority-date-tracker/tree/article-3) --- ## Prompts used in this article ### Prompt 1 Can you please show me how to update the highlighted code so that it will store the data after processing the URL, but before it saves the URL via `processed_urls_table.put_item`? **Response:** Sure, here's how you can modify the highlighted code to store the data after processing the URL, but before saving the URL via `processed_urls_table.put_item`: ```python for link in visa_bulletin_links: if '2022' in link or '2023' in link or '2024' in link: # Check if the URL has been processed response = processed_urls_table.get_item(Key={'url': link}) if 'Item' in response: print(f"Skipping URL: {link} (already processed)") continue # Process the URL print(f"Processing URL: {link}") url = f"https://travel.state.gov{link}" url_data = scrape_visa_bulletin(url) data.extend(url_data) # Store the data store_data(url_data) # Store the processed URL in DynamoDB processed_urls_table.put_item(Item={'url': link}) store_data(data) ``` Here's what's changed: 1. After scraping the visa bulletin data from the URL using `scrape_visa_bulletin(url)`, the scraped data is stored in a new variable `url_data`. 2. The `url_data` is then appended to the `data` list using `data.extend(url_data)`. 3.A new line `store_data(url_data)` is added to store the data scraped from the current URL. After storing the data, the URL is marked as processed in DynamoDB using `processed_urls_table.put_item(Item={'url': link})`. This way, the data is stored immediately after being scraped from the URL, and before the URL is marked as processed in DynamoDB. Note: This assumes that the `store_data` function is defined elsewhere in your code and correctly handles storing the data in the desired location (e.g., a database, a file, etc.). --- ### Prompt 2 Looking at the highlighted line, does Python have an equivalent of C#'s enum? What would the approach be in Python to use something similar? I would like to define an easier way to provide the `filing_type`, `category`, and `country`? **Response:** Yes, Python has an equivalent to C#'s enum called the `enum` module, which was introduced in Python 3.4. It allows you to define a set of symbolic names (members) bound to unique, constant values. This can make your code more readable and less prone to errors caused by typos or invalid values. [1] Here's an example of how you could use the `enum` module to define the possible values for `filing_type`, `category`, and `country` in your `read_data_locally` function: ```python from enum import Enum class FilingType(Enum): FINAL_DATE = 'Final Date' DATES_FOR_FILING = 'Dates for Filing' class Category(Enum): FIRST = '1st' SECOND = '2nd' THIRD = '3rd' OTHER_WORKERS = 'Other Workers' class Country(Enum): ALL_AREAS = 'All Chargeability Areas Except Those Listed' CHINA_MAINLAND = 'China-mainland born' EL_SALVADOR = 'El Salvador, Guatemala, Honduras' # Add more countries as needed def read_data_locally(data, filing_type=FilingType.FINAL_DATE, category=Category.THIRD, country=Country.ALL_AREAS): # Your function logic here pass ``` In this example, we define three enum classes: `FilingType`, `Category`, and `Country`. Each class has members representing the valid values for that category. When calling `read_data_locally`, you can now use the enum members instead of strings, like `filing_type=FilingType.FINAL_DATE`. This has several advantages: 1. 
**Type safety** : The function will only accept values from the defined enums, preventing typos or invalid values. 2. **Readability** : The code becomes more self-documenting and easier to understand. 3. **Consistency** : You ensure that the same values are used throughout your codebase. Note that you can also assign custom values to enum members if needed, and you can add methods and properties to the enum classes. Sources: 1. [Is there any way to define the enum in python?](https://stackoverflow.com/questions/16928491/is-there-any-way-to-define-the-enum-in-python) --- ### Prompt 3 Using the enum values in this way doesn't return any data, could you please explain how I would access the string value for each enum in `filtered_data` code? **Response:** To access the string value of an enum member in the `filtered_data` code, you can use the `value` attribute of the enum member or call the `str()` function on the member. Here's an example of how you can modify the `read_data_locally` function to filter the data based on the string values of the enum members: ```python def read_data_locally(data, filing_type=FilingType.FINAL_DATE, category=Category.THIRD, country=Country.ALL_AREAS): # Filter the data based on filing_type, category, and country filtered_data = [entry for entry in data if entry['filing_type'] == filing_type.value and entry['category'] == str(category) and entry['country'] == country.value] # Sort the filtered data in descending order by bulletin_date sorted_data = sorted(filtered_data, key=itemgetter('bulletin_date'), reverse=True) # Print the sorted data for entry in sorted_data: print(f"Bulletin Date: {entry['bulletin_date']}, Date: {entry['date']}") return sorted_data ``` In this modified version: 1. For `filing_type`, we use `filing_type.value` to get the string value of the enum member. 2. For `category`, we use `str(category)` to get the string representation of the enum member. 3. For `country`, we use `country.value` to get the string value of the enum member. This way, the `filtered_data` list will contain only the entries where the `filing_type`, `category`, and `country` values match the string values of the corresponding enum members. Note that when you call the `read_data_locally` function, you can still use the enum members directly, like `read_data_locally(data, filing_type=FilingType.FINAL_DATE, category=Category.THIRD, country=Country.ALL_AREAS)`. By using the `value` attribute or `str()` function, you can access the string representation of the enum members, which is necessary for filtering the data based on the string values stored in the dictionary entries. --- ### Prompt 4 Thanks, how would I add the names of the enums to the event when calling `lambda_handler` - I want to be able to inject it into the the mock_event object for local testing, but also be able to call the Lambda using query string parameters? It should then insert the values from the event when calling `read_data()`. **Response:** To add the names of the enums to the event when calling `lambda_handler`, you can modify the function to accept the enum values as query string parameters or as part of the event payload. Then, you can use these values when calling `read_data()`. Here's an example of how you can modify the `lambda_handler` function to accept the enum values as query string parameters: ```python import urllib.parse def lambda_handler(event, context): # ... 
(existing code) # Parse query string parameters query_params = event.get('queryStringParameters', {}) filing_type_str = query_params.get('filing_type', 'FINAL_DATE') country_str = query_params.get('country', 'ALL_AREAS') # Convert string values to enum members filing_type = FilingType[filing_type_str.upper()] category = Category[category_str.upper()] country = Country[country_str.upper().replace(' ', '_')] # Call read_data with the enum values eb3_data = read_data(filing_type=filing_type, category=category, country=country) return { 'statusCode': 200, 'body': eb3_data } ``` In this modified version: The `query_params` dictionary is created from the `queryStringParameters` in the event object. The `filing_type_str`, `category_str`, and `country_str` values are retrieved from the `query_params` dictionary, with default values provided if the parameters are not present. The string values are converted to their corresponding enum members using the `FilingType`, `Category`, and `Country` enums. The `upper()` method is used to handle case-insensitive values, and the `replace(' ', '_')` is used to handle spaces in the country string. The `read_data` function is called with the enum values. To test this locally, you can create a `mock_event` object with the desired query string parameters: ```python mock_event = { 'queryStringParameters': { 'filing_type': 'FINAL_DATE', 'category': 'THIRD', 'country': 'ALL_AREAS' } } result = lambda_handler(mock_event, None) print(result) ``` When invoking the Lambda function, you can pass the query string parameters in the URL, like `https://your-lambda-function.aws/path?filing_type=FINAL_DATE&category=THIRD&country=ALL_AREAS`. If you prefer to pass the enum values as part of the event payload instead of query string parameters, you can modify the `lambda_handler` function accordingly: ```python def lambda_handler(event, context): # ... (existing code) # Parse query string parameters query_params = event.get('queryStringParameters', {}) filing_type_str = query_params.get('filing_type', 'FINAL_DATE') country_str = query_params.get('country', 'ALL_AREAS') # Convert string values to enum members filing_type = FilingType[filing_type_str.upper()] category = Category[category_str.upper()] country = Country[country_str.upper().replace(' ', '_')] # Call read_data with the enum values eb3_data = read_data(filing_type=filing_type, category=category, country=country) return { 'statusCode': 200, 'body': eb3_data } ``` In this case, you would pass the enum values as part of the event payload when invoking the Lambda function. --- ### Prompt 5 Can you please explain how the line `category_str = query_params.get('category', 'THIRD')` works? Is the 2nd parameter "THIRD" a default value if it can't extract it? **Response:** Yes, you're correct. The line `category_str = query_params.get('category', 'THIRD')` is using the `get()` method of a dictionary (or dictionary-like object) to retrieve the value associated with the key `'category'`. The `get()` method takes two arguments: 1. The key to look up in the dictionary 2. An optional default value to return if the key is not found In this case, `query_params` is likely a dictionary containing query parameters extracted from a URL or form data. The `get('category', 'THIRD')` part is attempting to retrieve the value associated with the `'category'` key from `query_params`. If the `'category'` key exists in `query_params`, its associated value will be returned and assigned to `category_str`. 
However, if the `'category'` key is not present in `query_params`, the default value `'THIRD'` will be used instead and assigned to `category_str`. So in summary, yes - the second argument `'THIRD'` is acting as a default value that will be used for `category_str` if the `'category'` key cannot be found in `query_params`. --- ### Prompt 6 If I wanted to replace the string `FINAL_DATE` with the name of the enum, how would I do that? Would something like `FilingType.FINAL_DATE.str()` work? **Response:** In Python, you can access the value of an enum member directly using its name. There is no need for a separate method like `str()`. You can simply use `FilingType.FINAL_DATE.value` to get the string value associated with the `FINAL_DATE` member of the `FilingType` enum. Here's an example of how you can replace the string `'FINAL_DATE'` with the value of the `FilingType.FINAL_DATE` enum member: ```python filing_type_str = query_params.get('filing_type', FilingType.FINAL_DATE.value) ``` In this line, if the `'filing_type'` key is not present in `query_params`, the value of `FilingType.FINAL_DATE.value` (which is `'Final Date'`) will be used as the default value for `filing_type_str`. Similarly, you can use the same approach for the `country_str` and `category_str` assignments: ```python filing_type_str = query_params.get('filing_type', FilingType.FINAL_DATE.value) country_str = query_params.get('country', Country.ALL_AREAS.value) category_str = query_params.get('category', Category.THIRD.value) ``` By using the `value` attribute of the enum members, you can directly access the string values associated with each enum member, without needing any additional methods like `str()`. --- ### Prompt 7 Thanks, but wouldn't that set the value of the enum instead of the name since further down we look up the value of the enum via `filing_type = FilingType[filing_type_str.upper()]`? **Response:** You're absolutely right, my previous suggestion would not work correctly in this case. Since you are using the string value to look up the corresponding enum member later on with `FilingType[filing_type_str.upper()]`, you need to use the string name of the enum member as the default value, not the `value` attribute. [1] To replace the string `'FINAL_DATE'` with the name of the `FilingType.FINAL_DATE` enum member, you can use the following: ```python filing_type_str = query_params.get('filing_type', 'FINAL_DATE') ``` This way, if the `'filing_type'` key is not present in `query_params`, the string `'FINAL_DATE'` will be assigned to `filing_type_str`. Then, when you do `filing_type = FilingType[filing_type_str.upper()]`, it will correctly look up the `FilingType.FINAL_DATE` member. Similarly, for the other enum types, you can use the string names of the desired default enum members: ```python filing_type_str = query_params.get('filing_type', 'FINAL_DATE') country_str = query_params.get('country', 'ALL_AREAS') ``` This way, the default values will be the string names of the enum members, which can then be used to look up the corresponding enum members correctly. Sources: 1. [Entity Framework(EF) Enum issue/bug/feature - DEV Community](https://dev.to/tsotsi1/entity-frameworkef-enum-issuebugfeature-4o65) --- ### Prompt 8 This isn't quite what I'm looking for. I want a way to avoid using a hard-coded string for the enum key `FINAL_DATE`, and instead somehow reference `FilingType.FINAL_DATE` so that if I decide to rename that specific enum value to e.g. 
`FINAL_DATE_NEW`, I would get an error when trying to the code as `FilingDate.FINAL_DATE` would no longer exist. **Response:** I understand now. You want to avoid hard-coding the string name of the enum member and instead reference the enum member itself, so that if the enum member name changes, you'll get an error at compile/run time. To achieve this, you can use the `name` attribute of the enum member. The `name` attribute returns the string name of the enum member. Here's how you can modify the code: ```python from enum import Enum class FilingType(Enum): FINAL_DATE = 'Final Date' # other enum members default_filing_type = FilingType.FINAL_DATE filing_type_str = query_params.get('filing_type', default_filing_type.name) filing_type = FilingType[filing_type_str.upper()] ``` In this example: 1. We define a `default_filing_type` variable and assign it the `FilingType.FINAL_DATE` enum member. 2. When getting the `filing_type_str` from `query_params`, we use `default_filing_type.name` as the default value. This way, if you rename the `FilingType.FINAL_DATE` enum member to something else (e.g., `FilingType.FINAL_DATE_NEW`), you'll get an error when trying to access `FilingType.FINAL_DATE`, because it no longer exists. The `name` attribute of the enum member is a string, so it can be used as the default value for `filing_type_str`. Later, when you look up the enum member using `FilingType[filing_type_str.upper()]`, it will work as expected, because you're using the string name of the enum member. This approach ensures that if you rename or remove an enum member, you'll get an error at compile/run time, rather than silently using an incorrect hard-coded string value. --- ### Prompt 9 The current lambda_handler function has 2 purposes, the first is to scrape the dates from the USCIS website and store them in the database, and the 2nd is to return the data based on the parameters specified. What is the best approach given the current structure (Python for the code, and Terraform for the infrastructure) to split this into 2 different functions? **Response:** To split the current `lambda_handler` function into two separate functions, you can follow these steps: [1] 1. **Create a new Lambda function for scraping and storing data** In your Terraform configuration, create a new AWS Lambda Function resource. This function will be responsible for scraping the USCIS website and storing the data in the database. [2] ```hcl resource "aws_lambda_function" "scraper_function" { filename = data.archive_file.scraper_lambda_zip.output_path function_name = "uscis-scraper" role = aws_iam_role.lambda_exec_role.arn handler = "scraper.lambda_handler" runtime = "python3.9" } ``` 2. **Create a new Lambda function for retrieving data** Create another AWS Lambda Function resource in your Terraform configuration. This function will be responsible for retrieving data from the database based on the provided parameters. ```hcl resource "aws_lambda_function" "retriever_function" { filename = data.archive_file.retriever_lambda_zip.output_path function_name = "uscis-retriever" role = aws_iam_role.lambda_exec_role.arn handler = "retriever.lambda_handler" runtime = "python3.9" } ``` 3. **Separate the Python code** In your Python code, create two separate files: `scraper.py` and `retriever.py`. `scraper.py` will contain the code for scraping the USCIS website and storing the data in the database. The `lambda_handler` function in this file will be responsible for this task. 
`retriever.py` will contain the code for retrieving data from the database based on the provided parameters. The `lambda_handler` function in this file will be responsible for this task. 4. **Update the Lambda function handlers** In your Terraform configuration, update the `handler` argument for each Lambda function to point to the correct file and function name. [3] ```hcl resource "aws_lambda_function" "scraper_function" { # ... handler = "scraper.lambda_handler" } resource "aws_lambda_function" "retriever_function" { # ... handler = "retriever.lambda_handler" } ``` 5. **Configure triggers and permissions** Depending on your use case, you may want to configure triggers or permissions for each Lambda function. For example, you could set up a CloudWatch Event Rule to trigger the scraper function periodically, or configure API Gateway to invoke the retriever function based on HTTP requests. Sources: 1. [Define Lambda function handler in Python - AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/python-handler.html) 2. [Understanding Terraform functions, expressions, and meta-arguments - AWS Prescriptive Guidance](https://docs.aws.amazon.com/prescriptive-guidance/latest/getting-started-terraform/functions-expressions.html) 3. [new Lambda Function URL with Terraform | AWS re:Post](https://repost.aws/questions/QUASho7GhhStyWZx1H93weCQ/new-lambda-function-url-with-terraform) --- ### Prompt 10 Can you show me how I would extract the enum classes to their own file so I can reference them from both the scraper.py and retriever.py files? **Response:** Sure, here's how you can extract the enum classes to their own file and reference them from both `scraper.py` and `retriever.py`: 1. Create a new file called `enums.py` in the same directory as `scraper.py` and `retriever.py`. 2. In `enums.py`, define your enum classes: ```python # enums.py from enum import Enum class FilingType(Enum): FINAL_DATE = 'Final Date' INITIAL_DATE = 'Initial Date' # Add other enum members as needed class Country(Enum): ALL_AREAS = 'All Areas' CHINA_MAINLAND_BORN = 'China Mainland-born' # Add other enum members as needed class Category(Enum): FIRST = 'First' SECOND = 'Second' THIRD = 'Third' # Add other enum members as needed ``` 3. In `scraper.py` and `retriever.py`, import the enum classes from `enums.py`: ```python # scraper.py from enums import FilingType, Country, Category # Your scraper code here # retriever.py from enums import FilingType, Country, Category # Your retriever code here ``` Now, you can use the enum classes `FilingType`, `Country`, and `Category` in both `scraper.py` and `retriever.py`. For example, in `retriever.py`, you can use the enum classes like this: ```python # retriever.py from enums import FilingType, Country, Category def lambda_handler(event, context): query_params = event.get('queryStringParameters', {}) filing_type_str = query_params.get('filing_type', FilingType.FINAL_DATE.name) filing_type = FilingType[filing_type_str.upper()] country_str = query_params.get('country', Country.ALL_AREAS.name) country = Country[country_str.upper()] category_str = query_params.get('category', Category.THIRD.name) category = Category[category_str.upper()] # Your retrieval logic here ``` By extracting the enum classes to a separate file (`enums.py`), you can easily import and use them in both `scraper.py` and `retriever.py`. 
This approach promotes code reusability and maintainability, as you only need to define the enum classes in one place, and any changes to the enum classes will be reflected in both files. --- ### Prompt 11 Can you please show me how I would update the line `table = dynamodb.Table('VisaBulletinData')` to use an environment variable called "BULLETIN_DATA" instead of the hard-coded string, and only use that hard-coded string as a fallback if the environment variable is not set? **Response:** Sure, here's how you can update the line `table = dynamodb.Table('VisaBulletinData')` to use an environment variable called `BULLETIN_DATA` and fallback to the hard-coded string if the environment variable is not set: ```python import os table_name = os.environ.get('BULLETIN_DATA', 'VisaBulletinData') table = dynamodb.Table(table_name) ``` Here's what's happening: 1. `import os` imports the `os` module, which provides a way to interact with the operating system, including accessing environment variables. 2. `os.environ.get('BULLETIN_DATA', 'VisaBulletinData')` retrieves the value of the environment variable `BULLETIN_DATA`. If the environment variable is not set, it will use the fallback value `'VisaBulletinData'`. 3. The retrieved value (either from the environment variable or the fallback value) is assigned to the `table_name` variable. 4. `dynamodb.Table(table_name)` creates a DynamoDB Table object using the value stored in `table_name`. [1] By using `os.environ.get()`, you can easily switch between using the environment variable or the hard-coded string, depending on whether the environment variable is set or not. To set the `BULLETIN_DATA` environment variable when running your Lambda function, you can configure it in the AWS Lambda console or through your Terraform configuration. Here's an example of how you can set the environment variable using Terraform: ```hcl resource "aws_lambda_function" "your_lambda_function" { # ... other configuration ... environment { variables = { BULLETIN_DATA = "YourTableName" } } } ``` Replace `"YourTableName"` with the desired name of your DynamoDB table. With this configuration, your Lambda function will use the value of the `BULLETIN_DATA` environment variable as the DynamoDB table name. If the environment variable is not set (or set to an empty string), it will fallback to using the hard-coded string `'VisaBulletinData'`. Sources: 1. [aws-lambda-dynamodb - AWS Solutions Constructs](https://docs.aws.amazon.com/solutions/latest/constructs/aws-lambda-dynamodb.html) --- ### Prompt 12 Can you please show me how I would call the `visa_bulletin_retriever` Lambda function using the AWS CLI on my local terminal with an event payload of: ```json { 'queryStringParameters': { 'filing_type': 'FINAL_DATE', 'category': 'THIRD', 'country': 'ALL_AREAS' } } ``` **Response:** Sure, here's how you can invoke the `visa_bulletin_retriever` Lambda function using the AWS CLI on your local terminal with the provided event payload: ```bash aws lambda invoke \ --function-name visa_bulletin_retriever \ --cli-binary-format raw-in-base64-out \ --payload '{"queryStringParameters": {"filing_type": "FINAL_DATE", "category": "THIRD", "country": "ALL_AREAS"}}' \ response.json ``` Let's break down this command: 1. `aws lambda invoke`: This is the AWS CLI command to invoke a Lambda function. 2. `--function-name visa_bulletin_retriever`: Specifies the name of the Lambda function you want to invoke. Replace `visa_bulletin_retriever` with the actual name of your Lambda function. 3. 
`--cli-binary-format raw-in-base64-out`: This option tells the AWS CLI to send the payload as raw binary data and to decode the response from Base64. 4. `--payload '{"queryStringParameters": {"filing_type": "FINAL_DATE", "category": "THIRD", "country": "ALL_AREAS"}}'`: This is the JSON payload that you want to pass to your Lambda function. It matches the structure you provided in your example. 5. `response.json`: This is the name of the file where the Lambda function's response will be saved. You can choose any file name you prefer. After running this command, the Lambda function will be invoked with the provided payload, and the response will be saved in the `response.json` file. Make sure you have the AWS CLI configured with the appropriate credentials and permissions to invoke the Lambda function. If you want to see the response in the terminal instead of saving it to a file, you can remove the `response.json` part from the command: ```bash aws lambda invoke \ --function-name visa_bulletin_retriever \ --cli-binary-format raw-in-base64-out \ --payload '{"queryStringParameters": {"filing_type": "FINAL_DATE", "category": "THIRD", "country": "ALL_AREAS"}}' ``` This will print the Lambda function's response directly in the terminal. Note: If you need to pass more complex payloads or payloads from a file, you can use the `--payload-json file://path/to/payload.json` option instead of embedding the JSON payload directly in the command. --- ### Prompt 13 Can you please show me how to update the return method for `lambda_handler` in `retriever.py` so that the data is in the following shape please? ```json { 'filing_type': 'Final Date', 'category': '3rd', 'country': 'All Chargeability Areas Except Those Listed', 'data': [ {'bulletin_date': '2024-07-01', 'date': '2021-12-01' }, {'bulletin_date': '2024-08-01', 'date': '2021-12-05' } ] } ``` **Response:** To update the return method for `lambda_handler` in `retriever.py` to return the data in the desired shape, you can modify the code as follows: ```python import json def lambda_handler(event, context): query_params = event.get('queryStringParameters', {}) filing_type_str = query_params.get('filing_type', FilingType.FINAL_DATE.name) filing_type = FilingType[filing_type_str.upper()].value country_str = query_params.get('country', Country.ALL_AREAS.name) country = Country[country_str.upper()].value category_str = query_params.get('category', Category.THIRD.name) category = Category[category_str.upper()].value data = read_data(filing_type, category, country) response = { 'filing_type': filing_type, 'category': category, 'country': country, 'data': data } return { 'statusCode': 200, 'body': json.dumps(response) } ``` Here's what's happening: 1. We import the `json` module to convert the Python dictionary to a JSON string. 2. Inside the `lambda_handler` function, we retrieve the `filing_type`, `country`, and `category` values from the `query_params` dictionary, using the corresponding enum values as defaults. 3. Instead of using the string names of the enum members, we use the `value` attribute to get the actual string values associated with each enum member (e.g., `FilingType.FINAL_DATE.value` returns `'Final Date'`). 4. We call the `read_data` function to get the data based on the `filing_type`, `category`, and `country` values. 5. We create a new dictionary `response` with the desired shape, including the `filing_type`, `category`, `country`, and `data` keys. 6. 
We return a dictionary with a `statusCode` of 200 and a `body` containing the JSON-encoded `response` dictionary. With this modification, the Lambda function will return a response in the desired shape: ```json { "statusCode": 200, "body": "{\"filing_type\": \"Final Date\", \"category\": \"3rd\", \"country\": \"All Chargeability Areas Except Those Listed\", \"data\": [{\"bulletin_date\": \"2024-07-01\", \"date\": \"2021-12-01\"}, {\"bulletin_date\": \"2024-08-01\", \"date\": \"2021-12-05\"}]}" } ``` Note that the `body` is a JSON-encoded string, so you may need to parse it on the client-side (e.g., using `JSON.parse()` in JavaScript) to work with the data as a JavaScript object.
cobusbernard
1,910,936
Fun and Fantastic Algorithms Every Developer Should Know
Algorithms are the magic spells of the software world, turning complex problems into manageable tasks...
0
2024-07-04T00:14:01
https://devtoys.io/2024/07/03/fun-and-fantastic-algorithms-every-developer-should-know/
devtoys, algorithms, coding, programming
--- canonical_url: https://devtoys.io/2024/07/03/fun-and-fantastic-algorithms-every-developer-should-know/ --- Algorithms are the magic spells of the software world, turning complex problems into manageable tasks and making our code work like a charm. Whether you're sorting through data, searching for the needle in the haystack, or finding the shortest path in a maze, there's an algorithm to help you out. We'll dive into some fun and fantastic algorithms that every developer should know and love. --- ## 1. Sorting Algorithms **Bubble Sort** Bubble Sort is the classic "bubble up" game. Imagine bubbles rising to the surface in a glass of soda, with each bubble representing a data element. It’s a simple algorithm that compares and swaps adjacent elements until the entire list is sorted. ```python def bubble_sort(arr): n = len(arr) for i in range(n): for j in range(0, n-i-1): if arr[j] > arr[j+1]: arr[j], arr[j+1] = arr[j+1], arr[j] return arr ``` **Quick Sort** Quick Sort is the divide-and-conquer superhero of sorting. It picks a pivot and splits the array into smaller parts, sorting them independently before merging them back together. It’s like organizing your bookshelf by first separating books into genres and then sorting each genre individually. ```python def quick_sort(arr): if len(arr) <= 1: return arr else: pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) ``` --- ## 2. Search Algorithms **Linear Search** Linear Search is like flipping through the pages of a book to find a specific word. It’s straightforward and checks each element until it finds the target. Perfect for small datasets or when you need a simple solution. ```python def linear_search(arr, target): for i in range(len(arr)): if arr[i] == target: return i return -1 ``` **Binary Search** Binary Search is the librarian's secret weapon. With a sorted list, it splits the search range in half, narrowing down the possibilities until it finds the target. It’s like playing a game of "hot and cold" with precision. ```python def binary_search(arr, target): low, high = 0, len(arr) - 1 while low <= high: mid = (low + high) // 2 if arr[mid] == target: return mid elif arr[mid] < target: low = mid + 1 else: high = mid - 1 return -1 ``` --- ## [This if an amazing book to explore for anyone who wants a deeper dive into the world of algorithms! πŸ€“](https://amzn.to/3VSYfXu) [![advanced algorithms](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/786juhfxdb5su84fcyod.jpg)](https://amzn.to/3VSYfXu) ## [Advanced Algorithms and Data Structures](https://amzn.to/3VSYfXu) --- ## 3. Graph Algorithms **Dijkstra's Algorithm** Dijkstra's Algorithm is the ultimate GPS navigator. It finds the shortest path in a graph, making it ideal for routing and network optimization. Think of it as plotting the quickest route from your home to your favorite ice cream parlor. ```python import heapq def dijkstra(graph, start): queue = [] heapq.heappush(queue, (0, start)) distances = {node: float('infinity') for node in graph} distances[start] = 0 while queue: current_distance, current_node = heapq.heappop(queue) if current_distance > distances[current_node]: continue for neighbor, weight in graph[current_node].items(): distance = current_distance + weight if distance < distances[neighbor]: distances[neighbor] = distance heapq.heappush(queue, (distance, neighbor)) return distances ``` ## 4. 
Dynamic Programming Algorithms **Fibonacci Sequence** The Fibonacci sequence is nature's coding puzzle. Using dynamic programming, we can generate the sequence efficiently, like filling in the missing pieces of a beautiful mosaic. ```python def fibonacci(n): if n <= 1: return n fib = [0, 1] for i in range(2, n+1): fib.append(fib[i-1] + fib[i-2]) return fib[n] ``` --- ## 5. Machine Learning Algorithms **K-Nearest Neighbors (KNN)** KNN is like finding your clique in a high school cafeteria. It classifies data points based on the "neighborhood" they belong to, making it a powerful tool for both classification and regression tasks. ```python import numpy as np from collections import Counter def knn_predict(data, query, k): distances = [] for group in data: for features in data[group]: euclidean_distance = np.linalg.norm(np.array(features) - np.array(query)) distances.append((euclidean_distance, group)) votes = [i[1] for i in sorted(distances)[:k]] vote_result = Counter(votes).most_common(1)[0][0] return vote_result ``` --- ## Conclusion These fun and fantastic algorithms are the secret ingredients to making your software efficient, reliable, and powerful. From sorting and searching to graph navigation and dynamic programming, mastering these algorithms will enhance your problem-solving skills and open up new possibilities in your coding journey. Explore more exciting tutorials and resources on DevToys.io, your playground for all things software development. Happy coding! --- ## ❀️ If you enjoyed this article, come check out our hacker community for more! Sign up for our newsletter to stay ahead of the curve! [DevToys.io](http://devtoys.io) πŸ‘½
3a5abi
1,910,937
S
A post by Rico Crasela Official
0
2024-07-04T00:11:00
https://dev.to/rico_craselaofficial_2dd/s-351p
rico_craselaofficial_2dd
1,909,824
Next.js Deployment: Vercel's Charm vs. GCP's Muscle
Have been exploring Next.js and hoping this post encourages thoughtful discussions. In this...
0
2024-07-03T23:59:04
https://dev.to/solitary-polymath/nextjs-deployment-vercels-charm-vs-gcps-muscle-l0
nextjs, googlecloud, vercel, webdev
> Have been exploring [Next.js](https://nextjs.org/) and hoping this post encourages thoughtful discussions. > In this exploration, I have used both Vercel and Google Cloud Platform (GCP) for hosting Next.js application. I have gathered insights and experiences that I would like to share, focusing on the strengths and considerations of using Vercel ~~and~~ OR GCP for hosting Next.js applications. <br> ... ## Choosing Between Vercel and Google Cloud Platform (GCP) for Next.js Deployment<br> For developers building ~~modern~~ web applications, Next.js remains a dominant framework. When considering deployment options, developers often face the choice between platforms 🎲 Vercel, GCP, AWS, Azure, and others. These platforms offer diverse solutions for different project needs and technical requirements. Front-end developers, particularly those in the React.js ~~!deprecated~~ and Next.js ecosystem, often lean towards Vercel due to its seamless integration and ease of use. Deploying applications on Vercel requires ~~0~~ minimal effort and aligns well with the rapid development cycles of front-end projects. In contrast, platforms like GCP, AWS, Azure, and others provide a broader range of services that may require a deeper understanding of cloud infrastructure and configuration management. While this article focuses exclusively on Vercel and GCP, it's crucial for ~~copilot~~ developers to understand the landscape of available options. By exploring the strengths and considerations of different platforms, you can make informed decisions that best suit your project requirements and ~~AI.prompt~~ technical expertise. Whether you're considering Vercel's simplicity or exploring the extensive capabilities of GCP, this article provides insights to help you navigate the complexities of choosing the right hosting platform for ~~ChatGPT's~~ your Next.js application. ![Rocket launched in the sky emoji](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hcv6avtkvkznm2q6j7ot.png) ## Vercel: The Streamlined ~~Champion~~<br> Vercel provides seamless integration and out-of-the-box support for all Next.js features. Deployment remains a breeze with automatic build and deployment upon pushing to your Git repository. Built-in serverless functions enable the creation of scalable backends alongside your Next.js application. Vercel's global content delivery network ([CDN](https://vercel.com/docs/concepts/edge-network/overview)) ensures ~~lightning-~~ fast load times for your users. ### Edge Middleware: [➚](https://vercel.com/docs/functions/edge-middleware) Experiment with features like A/B testing and feature flags by routing requests to different Next.js application versions using [Edge Middleware](https://vercel.com/docs/edge-network/frequently-asked-questions).<br> ### Focus on Developer Experience: Vercel offers a smooth development workflow. 
Improved debugging tools and serverless function logs further streamline the development process.<br> ### Ideal For: **MVPs and Prototypes** Vercel is a good option for validating ideas and getting projects off the ground quickly.<br> **Small to Medium-Sized Businesses**: With limited technical resources, businesses can leverage Vercel's ease of use for a hassle-free hosting experience.<br> **Cost-Conscious Projects**: The generous free tier makes Vercel a good option for projects with budget constraints.<br> ### Keep in Mind **Limited Customization**: While Vercel excels in ease of use, it may not offer the same level of granular control as GCP for experienced developers who might prefer managing configurations themselves.<br> **Pricing Considerations**: Vercel's pricing structure can become expensive as traffic scales or serverless functions become complex. [Vercel Pricing](https://vercel.com/pricing). ![UFO emoji](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1d0u8nfma6jy4gqh3gq5.png) ## GCP: The Powerhouse for Scalability<br> GCP offers a comprehensive suite of cloud services, including [Compute Engine](https://cloud.google.com/compute), [App Engine](https://cloud.google.com/appengine), and [Cloud Run](https://cloud.google.com/run). This translates to unparalleled control over your infrastructure and deployment configurations. Designed for large-scale applications, GCP effortlessly scales to accommodate significant traffic growth. Additionally, for projects heavily reliant on Google services like [BigQuery](https://cloud.google.com/bigquery), [Cloud Storage](https://cloud.google.com/storage), or [AI/ML tools](https://cloud.google.com/products/ai), GCP's tight integration creates a streamlined development experience for data-driven applications.<br> ### Cloud Run Advancements: Autoscaling based on container CPU usage, allowing for efficient resource allocation and improved integration with [Cloud Monitoring](https://cloud.google.com/monitoring) for better performance insights.<br> ### Ideal For: **Enterprise-Grade Applications**: GCP's robust infrastructure empowers you to handle the demands of large-scale applications with high traffic volumes.<br> **Data-Driven Projects**: Seamless integration with Google's data and AI/ML services (like [Cloud Functions](https://cloud.google.com/functions) and [Cloud SQL](https://cloud.google.com/sql)) streamlines development workflows for data-driven applications.<br><br> **Customization Enthusiasts**: GCP empowers experienced developers with granular control over their infrastructure and deployment configurations.<br> ### Keep in Mind: **Steeper Learning Curve**: Setting up and managing deployments on GCP requires a deeper understanding of cloud infrastructure compared to Vercel.<br> **Support Considerations**: Vercel offers dedicated developer support which can be invaluable for teams requiring more hand-holding. GCP requires more self-reliance.<br> ## Conclusion Both Vercel and GCP are excellent choices for deploying Next.js applications, but the ideal platform depends on your project's specific needs. Vercel shines with its developer-friendly experience, perfect for MVPs, small businesses, and cost-conscious projects. GCP caters to enterprise-grade applications, data-driven development, and customization enthusiasts, but requires a steeper learning curve and more self-reliance. > By understanding the strengths and limitations of each platform, you can make an informed decision that empowers your Next.js application to thrive. 
**Additionally**, explore [AWS](https://aws.amazon.com/), [DigitalOcean](https://www.digitalocean.com/), [Azure](https://azure.microsoft.com/), and [IBM Cloud](https://www.ibm.com/cloud) for more options. <br><br><br> ![Owl emoji](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qzsjduw0ej79dx5jdioq.png)
solitary-polymath
1,910,934
New Social Media Project!
Looking for experienced Full Stack, Python, and UI/UX programmers for a social media project! Let me...
0
2024-07-03T23:56:59
https://dev.to/chris_wilson_22/new-social-media-project-e33
python, aws, design, django
Looking for experienced Full Stack, Python, and UI/UX programmers for a social media project! Let me know if you are interested!
chris_wilson_22
1,910,929
Automating the creation of Node.js servers with a central API using Nuxt 3
In today's software development landscape, where agility and efficiency are crucial, the...
0
2024-07-03T23:46:53
https://dev.to/wgbn/automatizacion-de-la-creacion-de-servidores-nodejs-con-una-api-central-usando-nuxt-3-39ko
api, node, nuxt, nginx
In today's software development landscape, where agility and efficiency are crucial, automating repetitive processes becomes not just desirable but essential. I recently faced a challenge that is common among developers: the need to set up and deploy multiple Node.js servers quickly and consistently. To solve this problem, I built a solution using a central API written with Nuxt 3 that automates the entire process of creating and configuring Node.js servers. This approach not only significantly simplifies the deployment process, it also drastically reduces the time spent on manual tasks and minimizes the chance of human error.

### The Challenge in Detail

As a full-stack developer, I often faced the repetitive, error-prone task of manually configuring new Node.js servers. This process involved a series of meticulous steps:

1. **Creating Git repositories for deployment**: Setting up bare Git repositories for each new project, used on the servers to update the production code as part of the deployment process.
2. **Configuring Git hooks**: Implementing custom hooks to automate post-receive tasks such as building the code and restarting services.
3. **Process management with PM2**: Adding and configuring new applications in PM2, a robust process manager for Node.js applications that keeps services up and restarts them automatically on failure.
4. **Configuring Nginx**: Creating and enabling Nginx configurations for each new service, setting up an efficient reverse proxy and managing traffic routing.
5. **Restarting services**: Making sure all affected services, especially Nginx, are restarted correctly so the new configuration takes effect.

Each of these tasks required SSH access to the server and running a series of specific commands. This not only consumed precious time, it also significantly increased the chance of configuration mistakes, which could lead to deployment problems or, worse, security vulnerabilities.

### The Solution: A Central Automation API with Nuxt 3

To overcome these challenges, I built a robust, flexible central API using the Nuxt 3 framework. Choosing Nuxt 3 was strategic: it had recently become a requirement at the company I work for, and it also makes it easy to build efficient APIs through H3, a lightweight and fast HTTP framework.

![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0yfxucv7vol3lvz54dns.jpg)

Nuxt 3 offers several advantages that make it ideal for this kind of project:

1. **Modern framework**: Nuxt 3 is built with TypeScript and natively supports ESM (ECMAScript modules), providing a modern, typed development environment.
2. **Performance**: With its optimized build system and server-side rendering (SSR) support, Nuxt 3 delivers excellent performance.
3. **API routes**: Nuxt 3 simplifies building RESTful APIs through its API routes system, which uses H3 internally.
**Ecosistema**: la profunda integraciΓ³n con el ecosistema Vue.js le permite aprovechar una amplia gama de complementos y mΓ³dulos. ### H3: El corazΓ³n de la API MenciΓ³n especial merece H3, el framework HTTP utilizado por Nuxt 3 para sus rutas API. A diferencia del Express, el H3 estΓ‘ diseΓ±ado para ser extremadamente liviano y eficiente y ofrece: - **Baja sobrecarga**: El H3 tiene un diseΓ±o minimalista, lo que reduce el consumo de memoria y mejora los tiempos de arranque. - **Compatibilidad universal**: funciona en diferentes entornos, incluidos los sin servidor, los trabajadores y los Node.js tradicionales. - **API moderna**: utiliza promesas y async/await de forma nativa, lo que simplifica el manejo de operaciones asincrΓ³nicas. ## ImplementaciΓ³n detallada La implementaciΓ³n de la API principal se realizΓ³ utilizando Nuxt 3, aprovechando sus capacidades de rutas API y la eficiencia de H3. Exploremos algunos componentes clave de la implementaciΓ³n: ### Estructura del proyecto ``` raΓ­z del proyecto/ β”œβ”€β”€servidor/ β”‚ β”œβ”€β”€ API/ β”‚ β”‚ β”œβ”€β”€nginx/ | | | β”œβ”€β”€ activar.post.ts | | | β”œβ”€β”€ recargar.get.ts | | | └── sitios.post.ts β”‚ β”‚ β”œβ”€β”€ pm2/ | | | └── aplicaciones.post.ts β”‚ β”‚ └── repositorios/ | | β”œβ”€β”€ ganchos.post.ts | | └── index.post.ts | β”œβ”€β”€ middleware/ | | └── autentificaciones | β”œβ”€β”€ complementos/ | | └── inicios β”‚ └── utilidades/ | └── execCommand.ts β”œβ”€β”€ nuxt.config.ts └── paquete.json ``` El objetivo de este artΓ­culo no es detallar la implementaciΓ³n de cada endpoint, middleware o complemento, sino presentar la idea general y algunas soluciones clave de implementaciΓ³n. Queremos provocar al desarrollador que lo lea para que complemente el proyecto con sus propias ideas. AquΓ­ sΓ³lo abordaremos los extractos que considerΓ© mΓ‘s interesantes y relevantes para concretar. ### EjecuciΓ³n del comando Shell Un componente crucial de la implementaciΓ³n es la funciΓ³n `execShellCommand`, que permite la ejecuciΓ³n segura de comandos de shell. Esta funciΓ³n se ha implementado en `server/utils/execCommand.ts`: ```typescript importar {exec} desde 'child_process' exportar funciΓ³n predeterminada execShellCommand (cmd: cadena) { devolver nueva Promesa((resolver, rechazar) => { child_process.exec(cmd, (error, salida estΓ‘ndar, stderr) => { si (error) rechazar (stderr) de lo contrario resolver (salida estΓ‘ndar) }) }) } ``` ### ImplementaciΓ³n de puntos finales Veamos la implementaciΓ³n del punto final para agregar aplicaciones a PM2, ubicado en `server/api/apps.post.ts`: ```typescript importar execShellCommand desde '~/server/utils/execCommand' exportar por defecto defineEventHandler(async (evento: cualquiera) => { console.log('[POST] /api/pm2/apps') cuerpo constante = esperar cuerpo de lectura (evento) if (!body || !body.appName || !body.appScript || !body.appPath) { establecerEstadoRespuesta(evento, 400) return {Γ©xito: falso, error: 'parΓ‘metros requeridos'. } } intentar { // 1. Construye el comando PM2 let pm2Command = `pm2 start ${body.appScript} --name ${body.appName}` si (body.appPath) pm2Command += ` --cwd ${body.appPath}` // 2. Ejecutar el comando PM2 espere execShellCommand (pm2Command) return {Γ©xito: verdadero, mensaje: `Β‘AplicaciΓ³n '${body.appName}' agregada a PM2 exitosamente!` } } captura (error: cualquiera) { consola.log(error.mensaje) establecerEstadoRespuesta(evento, 500) return {Γ©xito: falso, error: 'Error al agregar la aplicaciΓ³n a PM2.' 
} } }) ``` En este ejemplo, podemos ver cΓ³mo H3 simplifica el manejo de solicitudes y respuestas a travΓ©s de `defineEventHandler`. La funciΓ³n `readBody` se utiliza para extraer y validar datos de solicitud de forma asincrΓ³nica. ### ConfiguraciΓ³n de Nginx El punto final para crear y activar configuraciones de Nginx demuestra cΓ³mo manejar las operaciones del sistema de archivos y ejecutar comandos de shell en secuencia: ```typescript importar * como fs desde 'fs' exportar por defecto defineEventHandler(async (evento: cualquiera) => { console.log('[POST] /api/nginx/sites') cuerpo constante = esperar cuerpo de lectura (evento) if (!body || !body.siteName || !body.siteConfig) { establecerEstadoRespuesta(evento, 400) return {Γ©xito: falso, error: 'parΓ‘metros requeridos'. } } const disponibleSitesPath = '/etc/nginx/sitios-disponibles' const newSiteFilePath = `${availableSitesPath}/${body.siteName}.conf` intentar { // 1. Comprobar si el sitio web ya existe const siteExists = espera fs.promises.access(newSiteFilePath, fs.constants.F_OK) .entonces(() => verdadero) .catch(() => falso) si (el sitio existe) { establecerEstadoRespuesta(evento, 409) return {Γ©xito: falso, error: `Ya existe una configuraciΓ³n para el sitio '${body.siteName}'.` } } // 2. Escribir la configuraciΓ³n del sitio en un archivo espere fs.promises.writeFile (newSiteFilePath, body.siteConfig) return {Γ©xito: verdadero, mensaje: `Β‘La configuraciΓ³n del sitio '${body.siteName}' se creΓ³ correctamente!` } } captura (error: cualquiera) { consola.log(error.mensaje) establecerEstadoRespuesta(evento, 500) return {error: 'Error al crear la configuraciΓ³n del sitio'. } } }) ``` Este punto final demuestra cΓ³mo Nuxt 3 y H3 permiten una integraciΓ³n fluida entre las operaciones del sistema de archivos asincrΓ³nicos y la ejecuciΓ³n de comandos de shell, todo dentro de un ΓΊnico controlador de eventos. ## Consideraciones de seguridad detalladas Al desarrollar una API con tal nivel de control sobre el servidor, la seguridad se convierte en una preocupaciΓ³n principal. Exploremos algunas medidas de seguridad esenciales en detalle: 1. **AutenticaciΓ³n y autorizaciΓ³n robustas**: - Implementar un sistema de autenticaciΓ³n JWT (JSON Web Tokens) para todas las rutas API. - Utilice middleware de autorizaciΓ³n para verificar permisos especΓ­ficos para cada punto final. - Considere implementar un sistema de roles para un control de acceso mΓ‘s granular. 2. **ValidaciΓ³n de entrada estricta**: - Utilice bibliotecas como `zod` o `joi` para la validaciΓ³n del esquema de los datos de entrada. - Desinfectar todas las entradas para evitar la inyecciΓ³n de comandos y ataques XSS. - Implementar limitaciΓ³n de velocidad para evitar ataques de fuerza bruta. 3. **Principio de privilegio mΓ­nimo**: - Crear un usuario dedicado en el sistema operativo con permisos estrictamente necesarios. - Utilice `sudo` con comandos especΓ­ficos en lugar de dar acceso completo a la raΓ­z. - Implementar un sistema de lista blanca para comandos permitidos. 4. **Monitoreo y AuditorΓ­a**: - Implementar un registro detallado de todas las acciones realizadas por la API. - Utilice un servicio de seguimiento como Datadog o New Relic para recibir alertas en tiempo real. - Realizar auditorΓ­as periΓ³dicas de logs y configuraciones de seguridad. 5. **HTTPS y seguridad de red**: - AsegΓΊrese de que toda la comunicaciΓ³n con la API se realice a travΓ©s de HTTPS. - Implementar CORS (Cross-Origin Resource Sharing) de forma restrictiva. 
- Considere utilizar una VPN para acceder a API en entornos de producciΓ³n. 6. **GestiΓ³n segura de secretos**: - Utilice variables de entorno o un servicio de gestiΓ³n de secretos como AWS Secrets Manager o HashiCorp Vault. - Nunca almacene contraseΓ±as o claves directamente en cΓ³digo o en archivos de configuraciΓ³n versionados. 7. **Actualizaciones y parches**: - Mantenga todos los paquetes y dependencias actualizados periΓ³dicamente. - Implementar un proceso CI/CD que incluya controles de seguridad automΓ‘ticos. ## ConclusiΓ³n y Reflexiones Finales ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eldnq9tp391fotfipvip.jpg) La implementaciΓ³n de esta API de automatizaciΓ³n central utilizando Nuxt 3 y H3 ha transformado significativamente el flujo de trabajo de implementaciΓ³n de mi servidor Node.js. Las tareas que anteriormente requerΓ­an acceso SSH manual y la ejecuciΓ³n de mΓΊltiples comandos ahora se pueden realizar con una simple llamada API, lo que reduce drΓ‘sticamente el tiempo de configuraciΓ³n y minimiza el error humano. La elecciΓ³n de Nuxt 3 como marco para esta soluciΓ³n resultΓ³ ser la correcta, ya que ofrece un equilibrio ideal entre rendimiento, facilidad de desarrollo y flexibilidad. La integraciΓ³n nativa con H3 para rutas API proporcionΓ³ una base sΓ³lida y eficiente para construir los puntos finales necesarios. Sin embargo, es fundamental resaltar que una API con este nivel de control sobre el servidor representa a la vez una herramienta poderosa y una responsabilidad importante. Implementar medidas de seguridad sΓ³lidas no sΓ³lo es recomendable, sino absolutamente esencial. Cada punto final debe tratarse como un vector de ataque potencial y la seguridad debe ser una consideraciΓ³n primordial en cada etapa del desarrollo y operaciΓ³n de la API. De cara al futuro, veo varias posibilidades para ampliar y mejorar esta soluciΓ³n: 1. **IntegraciΓ³n con sistemas de orquestaciΓ³n**: considere la integraciΓ³n con herramientas como Kubernetes o Docker Swarm para la gestiΓ³n de contenedores a gran escala. 2. **ImplementaciΓ³n de webhooks**: agregue compatibilidad con webhooks para notificar a sistemas externos sobre eventos importantes, como la creaciΓ³n exitosa de un nuevo servidor. 3. **Interfaz de usuario**: desarrolle una interfaz de usuario amigable utilizando Vue.js para complementar la API, facilitando aΓΊn mΓ‘s la administraciΓ³n del servidor. 4. **ExpansiΓ³n a otros servicios**: amplΓ­e la funcionalidad para cubrir otros servicios mΓ‘s allΓ‘ de Node.js, como bases de datos o servidores de cachΓ©. En conclusiΓ³n, esta soluciΓ³n no sΓ³lo optimizΓ³ mi proceso de trabajo, sino que tambiΓ©n abriΓ³ nuevas posibilidades de automatizaciΓ³n y gestiΓ³n de infraestructura. Con las precauciones de seguridad adecuadas, creo que enfoques similares pueden beneficiar significativamente a los equipos de desarrollo y operaciones, promoviendo una cultura DevOps mΓ‘s eficiente y Γ‘gil.
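As a small addendum to the Git hooks step listed in the challenge above: the article never shows the hook itself, but a minimal `post-receive` hook of the kind the `repos/hooks.post.ts` endpoint could generate might look like the sketch below. The deployment path, branch, and PM2 process name are hypothetical placeholders, not values from the article.

```bash
#!/bin/bash
# Hypothetical post-receive hook: check out the pushed code into the app
# directory, install production dependencies, and restart the PM2 process.
APP_DIR="/var/www/my-app"   # assumed deployment path
APP_NAME="my-app"           # assumed PM2 process name
BRANCH="main"               # assumed branch

git --work-tree="$APP_DIR" --git-dir="$PWD" checkout -f "$BRANCH"
cd "$APP_DIR" || exit 1
npm ci --omit=dev
pm2 restart "$APP_NAME"
```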
wgbn
1,910,928
Automate User and Group Management with Bash: A Comprehensive Guide
Introduction Managing user accounts and groups is a crucial task for SysOps engineers,...
0
2024-07-03T23:42:35
https://dev.to/vivienne/automate-user-and-group-management-with-bash-a-comprehensive-guide-3h5d
### Introduction Managing user accounts and groups is a crucial task for SysOps engineers, especially in an environment with many new developers. This article introduces a Bash script designed to automate these tasks, ensuring efficiency, consistency, and security. The script reads from a text file, creates users and groups as specified, sets up home directories, generates random passwords, logs actions, and stores passwords securely. ### Why Automate User and Group Management? Automation in user management offers several advantages: - **Efficiency:** Reduces the time spent on repetitive tasks. - **Consistency:** Ensures uniformity in user setup across the organization. - **Security:** Automatically generates secure passwords and sets appropriate permissions. - **Auditability:** Maintains a detailed log of actions for accountability. ### The Bash Script: create_users.sh Here's a step-by-step breakdown of the script: ```bash #!/bin/bash # Check if the input file exists if [ ! -f "$1" ]; then echo "Error: Input file not found." exit 1 fi # Define log and password file locations LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" # Initialize log file if it doesn't exist if [ ! -f "$LOG_FILE" ]; then sudo touch "$LOG_FILE" sudo chown root:root "$LOG_FILE" sudo chmod 600 "$LOG_FILE" fi # Initialize password file if it doesn't exist if [ ! -f "$PASSWORD_FILE" ]; then sudo mkdir -p /var/secure sudo touch "$PASSWORD_FILE" sudo chown root:root "$PASSWORD_FILE" sudo chmod 600 "$PASSWORD_FILE" fi # Redirect stdout and stderr to the log file exec > >(sudo tee -a "$LOG_FILE") 2>&1 # Function to check if a user exists user_exists() { id "$1" &>/dev/null } # Read each line from the input file while IFS=';' read -r username groups; do # Trim whitespace username=$(echo "$username" | tr -d '[:space:]') groups=$(echo "$groups" | tr -d '[:space:]') # Check if the user already exists if user_exists "$username"; then echo "User $username already exists." continue fi # Create user sudo useradd -m "$username" # Create personal group (same as username) sudo groupadd "$username" # Add user to personal group sudo usermod -aG "$username" "$username" # Create home directory sudo mkdir -p "/home/$username" sudo chown "$username:$username" "/home/$username" # Generate random password password=$(openssl rand -base64 12) # Set password for user echo "$username:$password" | sudo chpasswd # Log actions echo "User $username created. Password: $password" # Store passwords securely echo "$username,$password" | sudo tee -a "$PASSWORD_FILE" # Add user to specified groups if [ -n "$groups" ]; then IFS=',' read -ra group_list <<< "$groups" for group in "${group_list[@]}"; do sudo usermod -aG "$group" "$username" echo "Added $username to group $group" done fi done < "$1" ``` ### How to Use the Script 1. **Prepare the Input File:** Create a text file where each line follows the format `user;groups`, with usernames separated from their groups by a semicolon and groups separated by commas. Example: ``` light; sudo,dev,www-data idimma; sudo mayowa; dev,www-data ``` 2. **Run the Script:** Execute the script with the input file as an argument: ```bash bash create_users.sh <name-of-text-file> ``` 3. **Verify the Output:** - **Log File:** Check `/var/log/user_management.log` for a detailed log of actions performed. - **Password File:** Passwords are securely stored in `/var/secure/user_passwords.csv`. 
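4. **Spot-Check the Results (Optional):** One quick way to confirm everything worked, assuming the sample input above, is to query the accounts and the generated files directly with standard commands:

```bash
# Check that the users and their groups were created
id light
getent group dev www-data

# Inspect the action log and the stored passwords (both are root-only, mode 600)
sudo cat /var/log/user_management.log
sudo cat /var/secure/user_passwords.csv
```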
### Benefits of the Script - **Automation:** Saves time and reduces human error by automating user creation. - **Security:** Generates secure passwords and sets appropriate permissions. - **Logging:** Provides a detailed log for auditing and troubleshooting. ### Learn More with HNG Internship Interested in enhancing your skills and working on real-world projects? Check out the [HNG Internship](https://hng.tech/internship) program to learn from industry experts and gain valuable experience. You can also explore opportunities to [hire top talent](https://hng.tech/hire) from the HNG community. ### Conclusion Automating user management with a Bash script can significantly improve the efficiency and security of your IT operations. This script provides a robust solution for managing user accounts, ensuring a consistent and secure setup for new developers. By leveraging automation, you can focus on more strategic tasks, confident that new developers are onboarded efficiently and securely. For the complete script and further details, visit the [GitHub repository](https://github.com/your-repo/create_users). Feel free to leave comments or questions below, and happy automating!
vivienne
1,901,343
State Management in React Native: Redux, Context API, MobX, and Zustand
Hey devs! Managing state in React Native applications is crucial for ensuring a consistent and...
0
2024-07-03T23:41:51
https://dev.to/paulocappa/state-management-in-react-native-redux-context-api-mobx-and-zustand-2k6l
reactnative, redux, mobile, react
Hey devs! Managing state in React Native applications is crucial for ensuring a consistent and efficient user experience. There are various libraries and approaches available to facilitate this process, each with its own strengths and specific use cases. In this article, we will explore the key differences between Redux, Context API, MobX, and Zustand, highlighting their strengths, code examples, and how to apply them to your project. #### Redux Redux is a predictable state management library for JavaScript, widely adopted in the React Native community. It centralizes the application state in a single store, making it easier to trace and maintain. **Strengths** - **Predictability**: Centralized state that facilitates tracing and debugging. - **DevTools**: Redux DevTools for state inspection and manipulation. - **Community and Support**: Large community and extensive documentation. **Weaknesses** - **Complexity**: Requires significant setup and boilerplate. - **Learning Curve**: Can be challenging for beginners due to its robust structure. **Code Example** **store.js** ```javascript // store.js import { createStore } from 'redux'; // action types const INCREMENT = 'INCREMENT'; const DECREMENT = 'DECREMENT'; // actions export const increment = () => ({ type: INCREMENT }); export const decrement = () => ({ type: DECREMENT }); // reducer const initialState = { count: 0 }; const counterReducer = (state = initialState, action) => { switch (action.type) { case INCREMENT: return { count: state.count + 1 }; case DECREMENT: return { count: state.count - 1 }; default: return state; } }; // store export const store = createStore(counterReducer); ``` **App.js** ```javascript // App.js import React from 'react'; import { Provider, useSelector, useDispatch } from 'react-redux'; import { store, increment, decrement } from './store'; const Counter = () => { const count = useSelector(state => state.count); const dispatch = useDispatch(); return ( <div> <p>{count}</p> <button onClick={() => dispatch(increment())}>Increment</button> <button onClick={() => dispatch(decrement())}>Decrement</button> </div> ); }; const App = () => ( <Provider store={store}> <Counter /> </Provider> ); export default App; ``` #### Context API The Context API is a React native solution for passing data through the component tree without having to manually pass props at each level. **Strengths** - **Native**: An integral part of React, no need for external dependencies. - **Simplicity**: Easy to use for simple cases of state management. **Weaknesses** - **Performance**: Can cause unnecessary renderings if not used properly. - **Scalability**: Less suitable for large applications with many complex states. 
**Code Example** **CountContext.js** ```javascript // CountContext.js import React, { createContext, useState, useContext } from 'react'; const CountContext = createContext(); export const CountProvider = ({ children }) => { const [count, setCount] = useState(0); const increment = () => setCount(count + 1); const decrement = () => setCount(count - 1); return ( <CountContext.Provider value={{ count, increment, decrement }}> {children} </CountContext.Provider> ); }; export const useCount = () => useContext(CountContext); ``` **App.js** ```javascript // App.js import React from 'react'; import { CountProvider, useCount } from './CountContext'; const Counter = () => { const { count, increment, decrement } = useCount(); return ( <div> <p>{count}</p> <button onClick={increment}>Increment</button> <button onClick={decrement}>Decrement</button> </div> ); }; const App = () => ( <CountProvider> <Counter /> </CountProvider> ); export default App; ``` #### MobX MobX is a library that allows reactive state management, where components are automatically updated when the state changes. **Strengths** - **Reactivity**: Automatic updates based on state changes. - **Simplicity**: Less boilerplate compared to Redux for reactive states. **Weaknesses** - **Magic**: Can be less predictable due to its reactive nature. - **Adoption**: Less adopted compared to Redux, resulting in fewer resources and tools available. **Code Example** **store.js** ```javascript // store.js import { makeAutoObservable } from 'mobx'; class CounterStore { count = 0; constructor() { makeAutoObservable(this); } increment = () => { this.count += 1; }; decrement = () => { this.count -= 1; }; } export const counterStore = new CounterStore(); ``` **App.js** ```javascript // App.js import React from 'react'; import { observer } from 'mobx-react'; import { counterStore } from './store'; const Counter = observer(() => ( <div> <p>{counterStore.count}</p> <button onClick={counterStore.increment}>Increment</button> <button onClick={counterStore.decrement}>Decrement</button> </div> )); const App = () => ( <div> <Counter /> </div> ); export default App; ``` #### Zustand Zustand is a lightweight and simple state management library for React, offering high performance and an intuitive API. **Strengths** - **Simplicity**: Simple and easy-to-understand API. - **Performance**: High performance with optimized renderings. - **Size**: Lightweight with minimal overhead. **Weaknesses** - **Community**: Less adoption compared to Redux and Context API. - **Documentation**: Less extensive compared to more established libraries. **Code Example** **useStore.js** ```javascript // useStore.js import create from 'zustand'; const useStore = create(set => ({ count: 0, increment: () => set(state => ({ count: state.count + 1 })), decrement: () => set(state => ({ count: state.count - 1 })), })); export default useStore; ``` **App.js** ```javascript // App.js import React from 'react'; import useStore from './useStore'; const Counter = () => { const { count, increment, decrement } = useStore(); return ( <div> <p>{count}</p> <button onClick={increment}>Increment</button> <button onClick={decrement}>Decrement</button> </div> ); }; const App = () => ( <div> <Counter /> </div> ); export default App; ``` Each of the presented libraries β€” Redux, Context API, MobX, and Zustand β€” offers a unique way to manage state in React Native applications. The choice between them depends on your project's specific needs, such as complexity, performance, and personal preferences. 
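One practical note before the wrap-up: the snippets above use DOM elements (`div`, `p`, `button`) to keep the examples short. In an actual React Native app you would render the same state with native primitives. Here is a rough sketch of the Zustand counter done that way; it reuses the `useStore` hook defined above, and the `CounterScreen` component name is just illustrative:

```javascript
// CounterScreen.js - sketch using React Native primitives with the same store
import React from 'react';
import { View, Text, Button } from 'react-native';
import useStore from './useStore';

const CounterScreen = () => {
  const { count, increment, decrement } = useStore();

  return (
    <View>
      <Text>{count}</Text>
      <Button title="Increment" onPress={increment} />
      <Button title="Decrement" onPress={decrement} />
    </View>
  );
};

export default CounterScreen;
```

The same substitution (View/Text/Button in place of div/p/button) applies equally to the Redux, Context API, and MobX examples.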
I hope this guide has helped you better understand your options and make informed decisions when choosing a state management solution for your React Native projects. ### References 1. [Redux](https://redux.js.org/) 2. [React Context API](https://reactjs.org/docs/context.html) 3. [MobX](https://mobx.js.org/) 4. [Zustand](https://zustand.surge.sh/)
paulocappa
1,910,927
Automating the Creation of Node.js Servers with a Central API using Nuxt 3
In the current software development scenario, where agility and efficiency are crucial, automating...
0
2024-07-03T23:40:30
https://dev.to/wgbn/automating-the-creation-of-nodejs-servers-with-a-central-api-using-nuxt-3-ckm
api, node, nuxt, nginx
In the current software development scenario, where agility and efficiency are crucial, automating repetitive processes becomes not only desirable, but essential. I recently faced a common challenge among developers: the need to configure and deploy multiple Node.js servers quickly and consistently. To solve this problem, I developed a solution using a central API built with Nuxt 3, which automates the entire process of creating and configuring Node.js servers. This approach not only significantly simplifies the deployment process, but also drastically reduces the time spent on manual tasks and minimizes the possibility of human error. ### The Challenge in Detail As a full-stack developer, I was often faced with the repetitive and error-prone task of manually configuring new Node.js servers. This process involved a series of meticulous steps: 1. **Creation of Git Repositories for deployment**: Configure bare Git repositories, used on servers to update the code in production as part of the deployment pipeline, for each new project, facilitating the deployment process. 2. **Configuring Git Hooks**: Implement custom hooks to automate post-receive tasks such as compiling code and restarting services. 3. **Process Management with PM2**: Add and configure new applications in PM2, a robust process manager for Node.js applications, ensuring that services remain active and are automatically restarted in case of failures. 4. **Nginx Configuration**: Create and activate Nginx configurations for each new service, establishing an efficient reverse proxy and managing traffic routing. 5. **Services Restart**: Ensure that all affected services, especially Nginx, were properly restarted to apply the new settings. Each of these tasks required SSH access to the server and execution of a series of specific commands. This not only consumed precious time, but also significantly increased the chances of configuration errors, which could lead to deployment issues or, worse, security vulnerabilities. ### The Solution: A Central Automation API with Nuxt 3 To overcome these challenges, I developed a robust and flexible core API using the Nuxt 3 framework. The choice of Nuxt 3 was strategic, as it was a recent requirement to use in the company I work for, in addition to its ability to create efficient APIs through H3 , a lightweight and fast HTTP framework. ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0yfxucv7vol3lvz54dns.jpg) Nuxt 3 offers several advantages that make it ideal for this type of project: 1. **Modern Framework**: Nuxt 3 is built with TypeScript and natively supports ESM (ECMAScript Modules), providing a modern, typed development environment. 2. **Performance**: With its optimized build system and server-side rendering (SSR) support, Nuxt 3 offers excellent performance. 3. **API Routes**: Nuxt 3 simplifies the creation of RESTful APIs through its API routes system, which uses H3 internally. 4. **Ecosystem**: Deep integration with the Vue.js ecosystem allows you to take advantage of a wide range of plugins and modules. ### H3: The Heart of the API H3, the HTTP framework used by Nuxt 3 for its API routes, deserves a special mention. Unlike the Express, the H3 is designed to be extremely lightweight and efficient, offering: - **Low overhead**: The H3 is minimalist by design, reducing memory consumption and improving boot times. - **Universal compatibility**: Works in different environments, including serverless, workers and traditional Node.js. 
- **Modern API**: Uses Promises and async/await natively, simplifying the handling of asynchronous operations. ## Detailed Implementation The core API implementation was carried out using Nuxt 3, taking advantage of its API routes capabilities and the efficiency of H3. Let's explore some key components of the implementation: ### Project Structure ``` project-root/ ├── server/ │   ├── api/ │   │   ├── nginx/ │   │   │   ├── activate.post.ts │   │   │   ├── reload.get.ts │   │   │   └── sites.post.ts │   │   ├── pm2/ │   │   │   └── apps.post.ts │   │   └── repos/ │   │       ├── hooks.post.ts │   │       └── index.post.ts │   ├── middleware/ │   │   └── auth.ts │   ├── plugins/ │   │   └── init.ts │   └── utils/ │       └── execCommand.ts ├── nuxt.config.ts └── package.json ``` The objective of this article is not to detail the implementation of each endpoint, middleware or plugin, but rather to present the general idea and some key implementation solutions. We want to provoke the developer who reads it to complement the project with their own ideas. Here we will only address the excerpts that I considered most interesting and relevant to specify. ### Shell Command Execution A crucial component of the implementation is the `execShellCommand` function, which allows the safe execution of shell commands. This function has been implemented in `server/utils/execCommand.ts`: ```typescript import { exec } from 'child_process' export default function execShellCommand(cmd: string) { return new Promise((resolve, reject) => { exec(cmd, (error, stdout, stderr) => { if (error) reject(stderr) else resolve(stdout) }) }) } ``` ### Implementation of Endpoints Let's look at the implementation of the endpoint for adding applications to PM2, located at `server/api/apps.post.ts`: ```typescript import execShellCommand from '~/server/utils/execCommand' export default defineEventHandler(async (event: any) => { console.log('[POST] /api/pm2/apps') const body = await readBody(event) if (!body || !body.appName || !body.appScript || !body.appPath) { setResponseStatus(event, 400) return { success: false, error: 'missing params' } } try { let pm2Command = `pm2 start ${body.appScript} --name ${body.appName}` if (body.appPath) pm2Command += ` --cwd ${body.appPath}` await execShellCommand(pm2Command) return { success: true, message: `App '${body.appName}' added!` } } catch (error: any) { console.log(error.message) setResponseStatus(event, 500) return { success: false, error: 'PM2 Error' } } }) ``` In this example, we can see how H3 simplifies the handling of requests and responses through `defineEventHandler`. The `readBody` function is used to extract and validate request data asynchronously.
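To make the endpoint's contract concrete, here is a hypothetical call to it. The host, port, and payload values are illustrative only, and any authentication header the project's middleware might require is omitted:

```bash
curl -X POST http://localhost:3000/api/pm2/apps \
  -H "Content-Type: application/json" \
  -d '{"appName": "my-service", "appScript": "dist/index.js", "appPath": "/var/www/my-service"}'

# Expected response on success:
# {"success":true,"message":"App 'my-service' added!"}
```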
### Nginx Configuration The endpoint for creating and activating Nginx configurations demonstrates how to handle file system operations and executing shell commands in sequence: ```typescript import * as fs from 'fs' export default defineEventHandler(async (event: any) => { console.log('[POST] /api/nginx/sites') const body = await readBody(event) if (!body || !body.siteName || !body.siteConfig) { setResponseStatus(event, 400) return { success: false, error: 'missing params' } } const availableSitesPath = '/etc/nginx/sites-available' const newSiteFilePath = `${availableSitesPath}/${body.siteName}.conf` try { const siteExists = await fs.promises.access(newSiteFilePath, fs.constants.F_OK) .then(() => true) .catch(() => false) if (siteExists) { setResponseStatus(event, 409) return { success: false, error: `'${body.siteName}' already exists` } } await fs.promises.writeFile(newSiteFilePath, body.siteConfig) return { success: true, message: `Config '${body.siteName}' created!` } } catch (error: any) { console.log(error.message) setResponseStatus(event, 500) return { error: 'Error on creating site' } } }) ``` This endpoint demonstrates how Nuxt 3 and H3 enable smooth integration between asynchronous file system operations and shell command execution, all within a single event handler. ## In-Depth Security Considerations ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eldnq9tp391fotfipvip.jpg) When developing an API with such a level of control over the server, security becomes a primary concern. Let’s explore some essential security measures in detail: 1. **Robust Authentication and Authorization**: - Implement a JWT authentication system (JSON Web Tokens) for all API routes. - Use authorization middleware to check specific permissions for each endpoint. - Consider implementing a role system for more granular access control. 2. **Strict Input Validation**: - Use libraries such as `zod` or `joi` for schema validation of input data. - Sanitize all inputs to prevent command injection and XSS attacks. - Implement rate limiting to prevent brute force attacks. 3. **Principle of Least Privilege**: - Create a dedicated user on the operating system with strictly necessary permissions. - Use `sudo` with specific commands instead of giving full root access. - Implement a whitelist system for allowed commands. 4. **Monitoring and Auditing**: - Implement detailed logging of all actions performed by the API. - Use a monitoring service like Datadog or New Relic for real-time alerts. - Perform regular audits of logs and security configurations. 5. **HTTPS and Network Security**: - Ensure that all communication with the API is done via HTTPS. - Implement CORS (Cross-Origin Resource Sharing) in a restrictive way. - Consider using a VPN for API access in production environments. 6. **Secure Secret Management**: - Use environment variables or a secret management service such as AWS Secrets Manager or HashiCorp Vault. - Never store passwords or keys directly in code or in versioned configuration files. 7. **Updates and Patches**: - Keep all packages and dependencies updated regularly. - Implement a CI/CD process that includes automatic security checks. ## Conclusion and Final Reflections Implementing this core automation API using Nuxt 3 and H3 has significantly transformed my Node.js server deployment workflow. Tasks that previously required manual SSH access and executing multiple commands can now be accomplished with a simple API call, drastically reducing configuration time and minimizing human error. 
The choice of Nuxt 3 as the framework for this solution proved to be the right one, offering an ideal balance between performance, ease of development and flexibility. Native integration with H3 for API routes provided a solid and efficient foundation for building the required endpoints. However, it is crucial to highlight that an API with this level of control over the server represents both a powerful tool and a significant responsibility. Implementing robust security measures is not only recommended, but absolutely essential. Each endpoint must be treated as a potential attack vector, and security must be a primary consideration at each stage of API development and operation. Looking to the future, I see several possibilities for expanding and improving this solution: 1. **Integration with Orchestration Systems**: Consider integration with tools like Kubernetes or Docker Swarm for large-scale container management. 2. **Webhooks Implementation**: Add webhooks support to notify external systems about important events, such as the successful creation of a new server. 3. **User Interface**: Develop a friendly user interface using Vue.js to complement the API, making server management even easier. 4. **Expansion to Other Services**: Extend functionality to cover other services beyond Node.js, such as databases or cache servers. In conclusion, this solution not only optimized my work process, but also opened up new possibilities for automation and infrastructure management. With appropriate security precautions, I believe similar approaches can significantly benefit development and operations teams, promoting a more efficient and agile DevOps culture.
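As a closing addendum, in the spirit of complementing the project with your own ideas: the project structure above includes a `server/middleware/auth.ts`, and the security section recommends authenticating every API route. A minimal sketch of such a middleware, assuming a single shared bearer token read from an environment variable (a simplified stand-in for the JWT approach the article actually recommends), might look like:

```typescript
// server/middleware/auth.ts -- hypothetical sketch, not the article's actual code
export default defineEventHandler((event) => {
  // Guard only the API routes
  if (!event.path?.startsWith('/api/')) return

  const header = getHeader(event, 'authorization') || ''
  const token = header.replace(/^Bearer\s+/i, '')

  // API_TOKEN is assumed to be provided via the environment
  if (!process.env.API_TOKEN || token !== process.env.API_TOKEN) {
    throw createError({ statusCode: 401, statusMessage: 'Unauthorized' })
  }
})
```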
wgbn
1,910,924
Automating User and Group Management with a Bash Script.
As a SysOps engineer, one of your responsibilities is to streamline and automate the process of user...
0
2024-07-03T23:37:38
https://dev.to/thelma_ocansey/automating-user-and-group-management-with-a-bash-script-2b64
As a SysOps engineer, one of your responsibilities is to streamline and automate the process of user management. With the recent influx of new developers at our company, managing user accounts and their group memberships efficiently is crucial. In this article, I’ll walk you through a Bash script that automates the creation of users, assigns them to groups, and sets up their home directories with proper permissions. Additionally, the script logs all actions and securely stores the generated passwords. This task is part of the HNG Internship, a fantastic program that helps interns gain real-world experience. You can learn more about the program at the [HNG Internship website](https://hng.tech/internship) or consider hiring some of their talented interns through the [HNG Hire page](https://hng.tech/hire). The source code can be found on my https://github.com/nthelma30/Create_User.sh.git ### The Task We need to create a Bash script called `create_users.sh` that: 1. Reads a text file with usernames and groups. 2. Creates users and groups as specified. 3. Sets up home directories with appropriate permissions. 4. Generates random passwords for the users. 5. Logs all actions to `/var/log/user_management.log`. 6. Stores the generated passwords securely in `/var/secure/user_passwords.csv`. ### Input File Format The input file should contain lines formatted as `user;groups`, where `user` is the username, and `groups` is a comma-separated list of group names. For example: ``` light; sudo,dev,www-data idimma; sudo mayowa; dev,www-data ``` ### The Script Here is the `create_users.sh` script: ```bash #!/bin/bash # Check if the input file exists if [ ! -f "$1" ]; then echo "Error: Input file not found." exit 1 fi # Ensure log and secure directories are initialized once LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" # Initialize log file if [ ! -f "$LOG_FILE" ]; then sudo touch "$LOG_FILE" sudo chown root:root "$LOG_FILE" sudo chmod 600 "$LOG_FILE" fi # Initialize password file if [ ! -f "$PASSWORD_FILE" ]; then sudo mkdir -p /var/secure sudo touch "$PASSWORD_FILE" sudo chown root:root "$PASSWORD_FILE" sudo chmod 600 "$PASSWORD_FILE" fi # Redirect stdout and stderr to the log file exec > >(sudo tee -a "$LOG_FILE") 2>&1 # Function to check if user exists user_exists() { id "$1" &>/dev/null } # Function to check if a group exists group_exists() { getent group "$1" > /dev/null 2>&1 } # Function to check if a user is in a group user_in_group() { id -nG "$1" | grep -qw "$2" } # Read each line from the input file while IFS=';' read -r username groups; do # Trim whitespace username=$(echo "$username" | tr -d '[:space:]') groups=$(echo "$groups" | tr -d '[:space:]') # Check if the user already exists if user_exists "$username"; then echo "User $username already exists." else # Create user sudo useradd -m "$username" # Generate random password password=$(openssl rand -base64 12) # Set password for user echo "$username:$password" | sudo chpasswd # Log actions echo "User $username created. Password: $password" # Store passwords securely echo "$username,$password" | sudo tee -a "$PASSWORD_FILE" fi # Ensure the user's home directory and personal group exist sudo mkdir -p "/home/$username" sudo chown "$username:$username" "/home/$username" # Ensure the user's personal group exists if ! 
group_exists "$username"; then sudo groupadd "$username" fi sudo usermod -aG "$username" "$username" # Split the groups string into an array IFS=',' read -ra group_array <<< "$groups" # Check each group for group in "${group_array[@]}"; do if [[ -n "$group" ]]; then if group_exists "$group"; then echo "Group $group exists." else echo "Group $group does not exist. Creating group $group." sudo groupadd "$group" fi if user_in_group "$username" "$group"; then echo "User $username is already in group $group." else echo "Adding user $username to group $group." sudo usermod -aG "$group" "$username" fi fi done done < "$1" ``` ### Explanation 1. **Input Validation**: The script starts by checking if the input file exists. If not, it exits with an error message. 2. **Log and Password Files**: It ensures that the log file (`/var/log/user_management.log`) and the password file (`/var/secure/user_passwords.csv`) are created with appropriate permissions. 3. **Functions**: - `user_exists()`: Checks if a user already exists. - `group_exists()`: Checks if a group already exists. - `user_in_group()`: Checks if a user is a member of a group. 4. **Processing the Input File**: The script reads each line from the input file, trims whitespace, and processes the usernames and groups. 5. **User Creation**: It creates the user if they do not already exist, generates a random password using `openssl rand -base64 12`, and sets the password. 6. **Home Directory and Personal Group**: It ensures the user's home directory and personal group (named the same as the username) exist and sets the appropriate permissions. 7. **Group Membership**: It processes each group, creating it if it doesn’t exist, and adds the user to the group. ### Conclusion This script provides an efficient way to manage user accounts and their group memberships, ensuring that all actions are logged and passwords are stored securely. By automating these tasks, SysOps engineers can save time and reduce the risk of errors. For more information about the HNG Internship and its benefits, visit the [HNG Internship website](https://hng.tech/internship) or consider hiring from their pool of talented interns through the [HNG Hire page](https://hng.tech/hire). -- This article can be adapted to your specific requirements or company guidelines. Written by Thelma Ocansey
thelma_ocansey
1,910,923
SOLVING CORS ERROR
INTRODUCTION As a Mass Communication graduate turned software developer, my coding journey has been...
0
2024-07-03T23:25:36
https://dev.to/ijayyyy/solving-cors-error-25i9
cors, beginners, typescript, react
**INTRODUCTION**

As a Mass Communication graduate turned software developer, my coding journey has been filled with both challenges and rewards. This article will highlight the challenge I faced solving a server-side problem related to CORS (Cross-Origin Resource Sharing). I will briefly give the step-by-step approach I took to resolve this issue and ensure my application was running smoothly.

**THE PROBLEM**

The issue I encountered was a CORS error when trying to sign up users through my application. I used React with TypeScript for my project. The client side of my application was hosted on Netlify, while the server side was hosted on Render. Despite configuring CORS on the server side of my application, I kept receiving errors. Here is a breakdown of how I resolved this issue.

* Step 1: Identifying the CORS Error. When users attempted to sign up, the browser's console showed that the server was not properly handling CORS requests. It showed this error:

```
Access to XMLHttpRequest at '' from origin '' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
```

* Step 2: Configuring CORS on the Server. I had to make sure the server was correctly configured to handle CORS requests. I configured the server to allow requests from my client's URL on Netlify.

* Step 3: Rebuilding and Redeploying the Server. After making changes to the server configuration, it was important to recompile the TypeScript files into JavaScript and redeploy the server. `tsc`

* Step 4: Ensuring Proper Endpoints on the Client Side. On the client side, I updated the API endpoint to point to the correct URL. This was where the error was: I had forgotten to add "/api/" to my baseUrl.

* Step 5: Verifying and Testing. After deploying the changes, I tested the application to ensure that the CORS errors were resolved and that the signup and login worked as expected. This involved:
1. Testing the signup and login from the client application.
2. Ensuring that the server received and processed the requests correctly.
3. Checking the browser console and server logs for any other errors.

**Conclusion**

Solving the CORS and deployment issue was a significant learning experience. The error I encountered may not look like much, but it took me hours to figure it out, lol. This points out the importance of proper server configuration and thorough testing. If you encounter a similar issue, remember to:
1. Check and configure your server for CORS.
2. Ensure your client-side API calls the correct endpoints.
3. Test your application after making changes.

**HNG INTERNSHIP AND ME**

I recently embarked on the HNG Internship, a very competitive program designed to bring out the best in software developers. The internship offers an amazing opportunity to collaborate with experienced mentors and meet other smart interns. Do check out the HNG Internship program; it is an incredible platform to start your career in tech! Use any of the links below:
(https://hng.tech/premium)
(https://hng.tech/internship)
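As a footnote to Step 2 above: the article does not say which server framework was used, but a minimal sketch of the kind of CORS configuration it describes, assuming an Express backend with the `cors` package and a placeholder Netlify URL, could look like this:

```javascript
// server.js - hypothetical sketch of the Step 2 configuration
const express = require('express');
const cors = require('cors');

const app = express();

// Allow requests from the client hosted on Netlify (placeholder URL)
app.use(cors({
  origin: 'https://your-client-app.netlify.app',
  credentials: true,
}));

app.use(express.json());

// ...routes would be mounted under /api here, e.g. app.use('/api/auth', authRouter);

app.listen(process.env.PORT || 3000);
```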
ijayyyy
1,910,922
Day 984 : Reason Why
liner notes: Professional : Pretty chill day. Had a couple of meetings, responded to some community...
0
2024-07-03T23:25:03
https://dev.to/dwane/day-984-reason-why-4lm6
hiphop, code, coding, lifelongdev
_liner notes_: - Professional : Pretty chill day. Had a couple of meetings, responded to some community questions and worked on refactoring an application. - Personal : Went through tracks for the radio show. Looked at some land. Did some work on a logo. That was pretty much it. Went to sleep. ![An aerial view of Peyto Lake in Banff National Park, Canada. The lake is a bright turquoise color, and the surrounding mountains are snow-capped. The image is taken from a high vantage point, so the lake and mountains appear small. The sky is clear and blue. The lake is partially frozen over.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/32l7l7cisepm9fjksk87.jpg) So I have tomorrow off and will probably stay up later than normal. Going to eat dinner. Work on social media posts for Friday. Put together the playlist for the radio show. Add some text to the logo I'm working on. Watching an episode of "Demon Slayer" to end the night. I'm thinking tomorrow, I'll actually do some coding on a side project. haha Been coding so much at work, I find other non-coding things to work on. No real reason why. Need to get back to it. So yeah, no post tomorrow because it's 4th of July. Try not to blow yourself up! haha Have a great night! peace piece Dwane / conshus https://dwane.io / https://HIPHOPandCODE.com {% youtube wPlxlyFsh2w %}
dwane
1,910,919
Automate Managing Users and Groups with a Bash Script
Okay, we are here to transform your SysOps workflow with a simple Bash script that automates user and...
0
2024-07-03T23:23:47
https://dev.to/muad/automate-manage-users-and-groups-with-a-bash-script-39bf
bash, linux, sysop, automation
Okay, we are here to transform your SysOps workflow with a simple Bash script that automates user and group management effortlessly. This blog post is part of a series aimed at preserving knowledge for my future self-study guide; by building more projects of this nature I hope to connect with like-minded techies. Join me as we explore how to automate user and group management seamlessly with a Bash script. Let's get started!

### Follow these prerequisites and you're good to go executing your bash script

#### Linux Environment
Choose a Linux environment that suits your needs. For this blog, I'm using <u>Kali OS version 2022.4</u> (Debian-based).

#### Install Visual Studio Code
(Optional: you can also execute the script in the terminal using editors like Vim, Vi, or Nano.)

#### Create Necessary Directories
Ensure the directories required by the script are available.
`sudo mkdir -p /var/secure/`

#### Set Permissions
Set appropriate permissions for the directories and files.
`sudo touch /var/log/user_management.log`
`sudo touch /var/secure/user_passwords.csv`
`sudo chmod 600 /var/secure/user_passwords.csv`

#### Install Required Utilities
Make sure all necessary utilities are installed. These are usually pre-installed, but you can always check them again.
`sudo apt-get update`
`sudo apt-get install passwd coreutils openssl`

#### Prepare your Bash File

- Shebang

![Preparing your Bash file](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5e4g02ne7oimd078g5uy.png)

`#!/bin/bash`
Specifies that the code runs in the Bash shell.

- Require the script to run as root

![Must run as root](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xtdjfzdgwyvs527py5f9.png)

`if [ "$EUID" -ne 0 ]; then echo "This script must be run as root" exit 1 fi`
It prints an error message and exits if the script is not run as root.

- Function to log actions with a timestamp

`log_action() { echo "$(date +'%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE" }`
Logs each record with a timestamp to the log file. The `tee -a` command appends the message to the log file and also displays it on the terminal.

- Function to create a user and assign groups

`create_user() { local username="$1" local groups="$2"`
When the create_user function is called with two arguments, the first argument (`$1`) is stored in the username variable and the second argument (`$2`) is stored in the groups variable.

- Trim whitespace

`username=$(echo "$username" | xargs) groups=$(echo "$groups" | xargs)`
Trims any leading or trailing whitespace from the username and groups variables using the xargs command.

- Check whether the user already exists on the system

`if id "$username" &>/dev/null; then log_action "User $username already exists. (No action taken)" return fi`
This code checks if the user specified by username exists, and if so, logs a message and exits the function without taking any further action.

- Create a group for the user if it doesn't exist

`if ! getent group "$username" &>/dev/null; then groupadd "$username" log_action "Group $username created." fi`
It creates the user's personal group if that group is missing.

- Create the user with a home directory and assign the group

`useradd -m -g "$username" -s /bin/bash "$username" log_action "User $username created with directory and group $username."`
It creates the new user with a home directory, the personal group as primary group, and Bash as the login shell.

- Generate a random password for the new user

`password=$(openssl rand -base64 12) echo "$username:$password" | chpasswd log_action "Password for $username."`
This code generates a random password for the new user, sets it, and logs the action.

- Securely store the password

`echo "$username,$password" | tee -a "$PASSWORD_FILE"`
Takes the values stored in $username and $password, combines them into a single comma-separated string, appends that string to the file specified by $PASSWORD_FILE, and also prints it on the terminal.

- Add the user to additional groups if specified

`if [ -n "$groups" ]; then IFS=',' read -r -a group_array <<< "$groups" for group in "${group_array[@]}"; do group=$(echo "$group" | xargs) if ! getent group "$group" &>/dev/null; then groupadd "$group" log_action "Group $group created." fi usermod -aG "$group" "$username" log_action "User $username added to group $group." done fi }`

- Check if the input file is provided or the manual flag is used

`if [ -z "$1" ]; then echo "Usage: $0 <textfile.txt> or $0 --manual" exit 1 fi`

- Define the log file and password storage locations

`LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv"`

- Ensure the directories and log file exist

`mkdir -p /var/secure/ touch "$LOG_FILE" touch "$PASSWORD_FILE" chmod 600 "$PASSWORD_FILE"`

- Debugging output and input handling

`echo "DEBUG: LOG_FILE=$LOG_FILE" echo "DEBUG: PASSWORD_FILE=$PASSWORD_FILE" if [ "$1" == "--manual" ]; then echo "Enter the username:" read username echo "Enter the groups (use comma separation for more than ONE group (e.g: dev,devops)):" read groups create_user "$username" "$groups" else INPUT_FILE=$1 while IFS=';' read -r username groups; do create_user "$username" "$groups" done < "$INPUT_FILE" fi`
`log_action "User creation successful." echo "User creation successful"`

#### Read a text file of employee usernames and group names, formatted as user;groups per line

![Creating textfile.txt](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tuxl373sg8nhe4yc1ou8.png)

Create a text file containing the usernames and groups in the format user;groups. Each line should represent a user and their associated groups. Include a plain text file called 'textfile.txt' and add this data:
`username1;a,admin username2;dev username3;devops,admin`

#### Script Execution
Make the script executable, then run it. Head to the directory where your 'create_users.sh' file is saved and execute the command.
`chmod +x create_users.sh`

Here we have two methods to run the bash script:
OPTION 1 - `sudo ./create_users.sh textfile.txt`
OPTION 2 - `sudo ./create_users.sh --manual`

#### Results in the command line
When using the command `sudo ./create_users.sh textfile.txt`, here is the result.

![creating new users and group result](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/atnjq1g72my3h1jq29l1.png)

When using the command `sudo ./create_users.sh --manual`, here is the result.

![using manual input](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2kjb1pj9r3h05ra39952.png)

#### Testing an already created user
Input is rejected when an existing user is found in the logs.

![Not updated to the data log](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k5tq3kkhr9tawnzoe4oq.png)

### Join us if you want to be part of the team
HNG - Start your journey to becoming a world-class developer today! https://hng.tech
HNG Premium - Learn from the best in the game and become world-class! https://hng.tech/premium
muad
1,899,205
Como reutilizar sua UI
Um guia para criar seu Button lindamente
0
2024-07-03T23:18:43
https://dev.to/andrewmat/como-reutilizar-sua-ui-4cpp
braziliandevs, frontend, ui, designsystem
--- title: Como reutilizar sua UI published: true description: Um guia para criar seu Button lindamente tags: braziliandevs, frontend, ui, designSystem # cover_image: https://direct_url_to_image.jpg # Use a ratio of 100:42 for best results. # published_at: 2024-07-03 20:18 -0300 --- Basicamente todo mundo que conheΓ§o acaba criando componentes reutilizΓ‘veis para reaproveitar o trabalho feito. Essa Γ© uma abstração enganosamente simples: Γ© fΓ‘cil criar um Button, mas Γ© surpreendentemente difΓ­cil conseguir reutilizar ele em todos os casos de uso em um projeto. Eu aprendi algumas coisas com o passar dos anos, e vejo que vΓ‘rias pessoas acabam metendo o pΓ© na jaca sem perceber. Quero compartilhar algumas dicas para evitar que algumas jacas sejam pisadas. > Tenho um viΓ©s de React mas tudo pode ser aplicado em outros contextos, desde que tenha componentização da UI. ## Acesso rΓ‘pido * [Evite margens](#1-evite-margens) * [Se inspire em props HTML](#2-se-inspire-em-props-html) * [Componhe componentes](#3-componhe-componentes) * [Utilize bibliotecas headless](#4-utilize-bibliotecas-headless) * [Evite dependΓͺncias externas](#5-evite-dependΓͺncias-externas) * [Extra: Migração](#extra-6-migração) * [Juntando tudo](#juntando-tudo) ## 1) Evite margens [πŸ”—](#1-evite-margens) Colocar margens no seu componente acaba quebrando a expectativa de reuso do componente. EspaΓ§amento Γ© algo importante, e seu componente nΓ£o conhece onde vai ser aplicado. Ao invΓ©s disso, deixe a responsabilidade de espaΓ§amento para o pai que contΓ©m os componentes. Γ‰ possΓ­vel que ele utilize uma forma nova para gerenciar espaΓ§amentos, como paddings e gaps. Existem vΓ‘rias conversas [ΒΉ](https://markus.oberlehner.net/blog/css-the-spacing-between-elements-should-be-determined-by-the-parent-element/) [Β²](https://mxstbr.com/thoughts/margin) [Β³](https://youtu.be/jnV1u67_yVg?si=pTHh4k06pMdu6WXU&t=582) sobre como deve ser criado espaΓ§amentos nos componentes. Mas geralmente a conclusΓ£o Γ© nΓ£o colocar espaΓ§amento externo. Γ‰ mais confiΓ‘vel que seu componente pai lide com margens. E falando no componente pai... ## 2) Se inspire em props HTML [πŸ”—](#2-se-inspire-em-props-html) Γ‰ importante que componentes sejam utilizados de uma forma esperada. Utilizar uma interface que todo mundo conhece ajuda nisso, e os atributos HTML sΓ£o uma interface conhecida. Caso queira inserir uma margem no componente, ao invΓ©s de criar uma prop dedicada Γ  margem seu componente pode fazer como uma `div` genΓ©rica e aceitar estilos e classes como prop. O componente pai vai ter o contexto e o controle para estender seu componente para casos nΓ£o esperados. "Mas esse padrΓ£o Γ© muito comum no projeto, vou ter que declarar sempre no pai?". Algumas props podem ser colocadas **a mais** com as props tradicionais ainda tendo destaque. Geralmente Γ© uma prop `variant`, `size`, ou similar. Recomendo dar uma olhada em libs como [CVA](https://cva.style/docs) ou [vanilla-extract](https://vanilla-extract.style/documentation/api/style-variants/) pra gerenciar essas props de forma escalΓ‘vel. Se a prop tiver um nome padrΓ£o do HTML ela tambΓ©m dispensa documentação. Para devs novos Γ© mais fΓ‘cil aprender, e para pessoas experientes jΓ‘ tem uma expectativa e isso aumenta a produtividade. E isso inclui nΓ£o sΓ³ o nome de cada prop mas tambΓ©m o valor. Evite criar um contrato muito inovador na prop, como enums e objetos. Ao invΓ©s disso, as strings clΓ‘ssicas e JSX conseguem facilitar o uso. E falando em JSX... 
## 3) Componhe componentes [πŸ”—](#3-componhe-componentes) NΓ£o tente criar componentes com muita estrutura. Crie componentes atΓ΄micos que sejam fΓ‘ceis de controlar e reutilizar, e ao invΓ©s de colocar props para pedaΓ§os de UI, use JSX. Por exemplo, jΓ‘ fiz (e vi) props espalhadas no componente: ```jsx <IconButton icon="success" iconSize="16px" iconColor="black" iconPosition="end" >Clique Aqui</Button> ``` Essas props removem o controle de quem vai utilizar seu componente. Caso precise de customização, a pessoa vai ter que modificar o IconButton, o que raramente Γ© uma boa ideia. Caso vocΓͺ separe as responsabilidades e suporte children, o cΓ³digo vai ter uma estrutura clara e quem o utilizar vai ter mais controle do que querem fazer: ```jsx <Button> Clique aqui <SuccessIcon className="icon" /> </Button> <style> .icon { height: 16px; width: 16px; color: black; } </style> ``` Com o tempo, alguns componentes complexos podem ser estruturados mas sempre com base nesses componentes mais simples. Novas estruturas vΓ£o ser criadas com base nesses componentes atΓ΄micos, mesmo assim nΓ£o esconda eles por trΓ‘s dessas estruturas. ## 4) Utilize bibliotecas headless [πŸ”—](#4-utilize-bibliotecas-headless) Temos *muitos* padrΓ΅es de UI que hoje o HTML nΓ£o suporta nativamente e que devs precisam implementar do zero. Um exemplo clΓ‘ssico Γ© o componente de switch. Ele Γ© parecido com o checkbox, mas tanto seu visual quanto sua interação tem suas peculiaridades que diferem do checkbox padrΓ£o do HTML. ![Componente switch de exemplo, com a label "airplane mode" e a posição de desligado](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2a4rnivst1nbhoucgf1l.png) VocΓͺ cria um switch do zero. Γ‰ um exercΓ­cio legal e vocΓͺ faz um bom trabalho. Mas Γ© provΓ‘vel que vocΓͺ nΓ£o lidou corretamente com a navegação por teclado e leitor de tela do componente. VocΓͺ lΓͺ a [especificação no WAI-ARIA](https://www.w3.org/WAI/ARIA/apg/patterns/switch/) e implementa de acordo com ela. Mas agora o cΓ³digo ficou tΓ£o complexo que todo o resto do time tem dificuldades para acompanhar o cΓ³digo, e o ΓΊnico que consegue mexer nele Γ© vocΓͺ. Se outra pessoa mexe no `<Switch />` ela fica insegura sem saber se quebrou algo. O projeto tem dezenas de componente reutilizΓ‘veis cada um trazendo essa dor. Conforme cresce fica mais rΓ‘pido fazer um componente do zero do que modificar o que jΓ‘ existe. Seu componente reutilizΓ‘vel acaba nΓ£o reutilizado. Bibliotecas de componentes prΓ©-feitos vieram pra facilitar esse trabalho. Hoje existem muitas bibliotecas sem estilos, chamadas de headless UI, por exemplo [React Aria](https://react-spectrum.adobe.com/react-aria/components.html), [Radix](https://www.radix-ui.com/primitives) e [Headless UI](https://headlessui.com/). Elas implementam o comportamento esperado e vocΓͺ implementa o estilo. Se preferir, tem como utilizar bibliotecas estilizadas. Eu nΓ£o tive muita oportunidade de utilizar, mas conheΓ§o as populares [Ant Design](https://ant.design/components/overview), [Material UI](https://mui.com/material-ui/all-components/) e [Chakra UI](https://v2.chakra-ui.com/docs/components). A beleza de utilizar componentes prΓ©-prontos Γ© que vocΓͺ cria mais confianΓ§a que a pΓ‘gina esteja acessΓ­vel, sem a necessidade de muito cΓ³digo para manter. ## 5) Evite dependΓͺncias externas [πŸ”—](#5-evite-dependΓͺncias-externas) Os primeiros componentes reutilizΓ‘veis que fiz foram inputs. 
Como o padrΓ£o do projeto era utilizar [Formik](https://formik.org/), criei os inputs integrados ao Formik. Isso parecia uma decisΓ£o acertada na Γ©poca. Mas conforme o projeto foi crescendo tinham casos que nΓ£o se alinhavam com essa expectativa, e vΓ‘rios dos inputs nΓ£o precisavam do Formik. Isso adicionava uma fricção desnecessΓ‘ria cada vez mais difΓ­cil de lidar, e dificultou a migração para bibliotecas alternativas ao Formik. Se seu componente Γ© realmente reutilizΓ‘vel, ele nΓ£o deve ter expectativas do seu contexto. Quanto mais ele sabe for livre do contexto, mais reutilizΓ‘vel ele Γ©. Podemos estender para qualquer lΓ³gica externa ao componente. Gerenciamento de estado (redux), temas (styled components), validaçáes (react-hook-form), layout (flex/grid), e outras injeçáes de dependΓͺncias. Toda as integraçáes devem ser injetadas por props para nΓ£o engessar seu componente. ## Extra: 6) Migração [πŸ”—](#extra-6-migração) Talvez vocΓͺ fique inspirado agora, mas Γ© possΓ­vel que seu projeto jΓ‘ tenha UI reutilizΓ‘vel. Mudar esses componentes que tΓͺm seu uso generalizado Γ© uma tarefa complicada, e isso desencoraja a mudanΓ§a para um cΓ³digo melhor. NΓ£o desista da mudanΓ§a. DΓͺ um passo de cada vez: * Traga a ideia para o seu time. FaΓ§a provas de conceito, mostre o valor das mudanΓ§as, e evite fazer mudanΓ§as que as pessoas nΓ£o querem fazer. Lembre-se que o cΓ³digo Γ© feito para os humanos. * Tente adaptar ao mΓ‘ximo com o cΓ³digo que jΓ‘ existe. Algumas mudanΓ§as adicionam muito valor ao projeto, e outras mudanΓ§as nΓ£o tem tanto impacto. Escolha seu foco pra facilitar o uso. * Pense em um plano de migração. Γ‰ importante que todos do time estejam alinhados. Adicione o cΓ³digo novo, ajude pessoas a utilizΓ‘-lo, depreque cΓ³digo antigo, e de pouco em pouco remova o cΓ³digo antigo da sua base. --- ## Juntando tudo [πŸ”—](#juntando-tudo) Para terminar, um exemplo de um `<Button />` bΓ‘sico. ```tsx import React, { ComponentPropsWithoutRef, forwardRef } from 'react' import {Link} from 'react-router-dom' import {Slot} from '@radix-ui/react-slot' import clsx from 'clsx' import style from './button.module.css' type Props = ComponentPropsWithoutRef<'button'> & { asChild?: boolean } const Button = forwardRef<HTMLButtonElement, Props>( ({className, asChild, ...props}, ref) => { const Comp = asChild ? Slot : 'button' return ( <Comp className={clsx(style.button, className)} ref={ref} {...props} /> ) } ) ``` Explicando o cΓ³digo acima: * Γ‰ um componente que aceita como props todos os atributos do `<button />` genΓ©rico. * Ele aceita receber `ref` * Ele implementa uma classe `.button` que adiciona os estilos do projeto * Caso vocΓͺ queira aplicar o estilo do botΓ£o em um outro componente, Γ© possΓ­vel atravΓ©s da prop `asChild` ```tsx <Button className="minha-classe">Ver detalhes</Button> <Button type="submit" ref={buttonRef}>Enviar dados</Button> <Button asChild> <Link to="/login">Entrar</Link> </Button> ``` No fim, cΓ³digo reutilizΓ‘vel ajuda bastante, mas ele Γ© uma abstração e portanto deve ser bem pensado. Criar cΓ³digo reutilizΓ‘vel que nΓ£o se adapta a todos os contextos pode criar uma bola de neve. Γ‰ importante utilizar padrΓ΅es jΓ‘ reconhecidos e dar controle para os devs utilizarem de formas nΓ£o esperadas.
andrewmat
1,910,877
Automatizando a Criação de Servidores Node.js com uma API Central usando Nuxt 3
No cenΓ‘rio atual de desenvolvimento de software, onde a agilidade e a eficiΓͺncia sΓ£o cruciais, a...
0
2024-07-03T23:17:35
https://dev.to/wgbn/automatizando-a-criacao-de-servidores-nodejs-com-uma-api-central-usando-nuxt-3-11i6
api, node, nuxt, nginx
No cenΓ‘rio atual de desenvolvimento de software, onde a agilidade e a eficiΓͺncia sΓ£o cruciais, a automação de processos repetitivos se torna nΓ£o apenas desejΓ‘vel, mas essencial. Recentemente, enfrentei um desafio comum entre desenvolvedores: a necessidade de configurar e implantar mΓΊltiplos servidores Node.js de forma rΓ‘pida e consistente. Para resolver esse problema, desenvolvi uma solução utilizando uma API central construΓ­da com Nuxt 3, que automatiza todo o processo de criação e configuração de servidores Node.js. Esta abordagem nΓ£o apenas simplifica significativamente o processo de implantação, mas tambΓ©m reduz drasticamente o tempo gasto em tarefas manuais e minimiza a possibilidade de erros humanos. ### O Desafio em Detalhes Como desenvolvedor full-stack, frequentemente me via diante da tarefa repetitiva e propensa a erros de configurar manualmente novos servidores Node.js. Este processo envolvia uma sΓ©rie de etapas meticulosas: 1. **Criação de RepositΓ³rios Git para deploy**: Configurar repositΓ³rios Git bare, utilizados nos servidores para ataulizar o cΓ³digo em produção fazendo parte do pipeline de deploy, para cada novo projeto, facilitando o processo de implantação. 2. **Configuração de Hooks do Git**: Implementar hooks personalizados para automatizar tarefas pΓ³s-recebimento, como a compilação do cΓ³digo e a reinicialização dos serviΓ§os. 3. **Gerenciamento de Processos com PM2**: Adicionar e configurar novos aplicativos no PM2, um gerenciador de processos robusto para aplicaçáes Node.js, garantindo que os serviΓ§os permaneΓ§am ativos e sejam reiniciados automaticamente em caso de falhas. 4. **Configuração do Nginx**: Criar e ativar configuraçáes do Nginx para cada novo serviΓ§o, estabelecendo um proxy reverso eficiente e gerenciando o roteamento de trΓ‘fego. 5. **Reinicialização de ServiΓ§os**: Garantir que todos os serviΓ§os afetados, especialmente o Nginx, fossem reiniciados adequadamente para aplicar as novas configuraçáes. Cada uma dessas tarefas exigia acesso SSH ao servidor e a execução de uma sΓ©rie de comandos especΓ­ficos. Isso nΓ£o apenas consumia um tempo precioso, mas tambΓ©m aumentava significativamente as chances de erros de configuração, que poderiam levar a problemas de implantação ou, pior, vulnerabilidades de seguranΓ§a. ### A Solução: Uma API Central de Automação com Nuxt 3 Para superar esses desafios, desenvolvi uma API central robusta e flexΓ­vel utilizando o framework Nuxt 3. A escolha do Nuxt 3 foi estratΓ©gica, jΓ‘ que foi uma exigΓͺncia recente usar na empresa em que trabalho, alΓ©m de sua capacidade de criar APIs eficientes atravΓ©s do H3, um framework HTTP leve e rΓ‘pido. ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0yfxucv7vol3lvz54dns.jpg) Nuxt 3 oferece vΓ‘rias vantagens que o tornam ideal para este tipo de projeto: 1. **Estrutura Moderna**: Nuxt 3 Γ© construΓ­do com TypeScript e oferece suporte nativo a ESM (ECMAScript Modules), proporcionando um ambiente de desenvolvimento moderno e tipado. 2. **Performance**: Com seu sistema de compilação otimizado e suporte a renderização do lado do servidor (SSR), Nuxt 3 oferece excelente performance. 3. **API Routes**: Nuxt 3 simplifica a criação de APIs RESTful atravΓ©s de seu sistema de rotas de API, que utiliza o H3 internamente. 4. **Ecosystem**: A integração profunda com o ecossistema Vue.js permite aproveitar uma vasta gama de plugins e mΓ³dulos. ### H3: O Coração da API O H3, o framework HTTP utilizado pelo Nuxt 3 para suas rotas de API, merece uma menção especial. 
Diferentemente do Express, o H3 Γ© projetado para ser extremamente leve e eficiente, oferecendo: - **Baixo overhead**: O H3 Γ© minimalista por design, reduzindo o consumo de memΓ³ria e melhorando o tempo de inicialização. - **Compatibilidade universal**: Funciona em diversos ambientes, incluindo serverless, workers e Node.js tradicional. - **API moderna**: Utiliza Promises e async/await nativamente, simplificando o tratamento de operaçáes assΓ­ncronas. ## Implementação Detalhada A implementação da API central foi realizada utilizando Nuxt 3, aproveitando seus recursos de API routes e a eficiΓͺncia do H3. Vamos explorar alguns componentes-chave da implementação: ### Estrutura do Projeto ``` project-root/ β”œβ”€β”€ server/ β”‚ β”œβ”€β”€ api/ β”‚ β”‚ β”œβ”€β”€ nginx/ | | | β”œβ”€β”€ activate.post.ts | | | β”œβ”€β”€ reload.get.ts | | | └── sites.post.ts β”‚ β”‚ β”œβ”€β”€ pm2/ | | | └── apps.post.ts β”‚ β”‚ └── repos/ | | β”œβ”€β”€ hooks.post.ts | | └── index.post.ts | β”œβ”€β”€ middleware/ | | └── auth.ts | β”œβ”€β”€ plugins/ | | └── init.ts β”‚ └── utils/ | └── execCommand.ts β”œβ”€β”€ nuxt.config.ts └── package.json ``` O objetivo deste artigo nΓ£o Γ© detalhar a implementação de cada endpoint, middleware ou plugin, mas sim apresentar a ideia geral e algumas soluçáes-chave da implementação. Queremos provocar o desenvolvedor que lΓͺ a complementar o projeto com suas prΓ³prias ideias. Abordaremos aqui apenas os trechos que considerei mais interessantes e relevantes para especificar. ### Execução de Comandos Shell Um componente crucial da implementação Γ© a função `execShellCommand`, que permite a execução segura de comandos shell. Esta função foi implementada em `server/utils/execCommand.ts`: ```typescript import { exec } from 'child_process' export default function execShellCommand(cmd: string) { return new Promise((resolve, reject) => { child_process.exec(cmd, (error, stdout, stderr) => { if (error) reject(stderr) else resolve(stdout) }) }) } ``` ### Implementação dos Endpoints Vamos examinar a implementação do endpoint para adicionar aplicativos ao PM2, localizado em `server/api/apps.post.ts`: ```typescript import execShellCommand from '~/server/utils/execCommand' export default defineEventHandler(async (event: any) => { console.log('[POST] /api/pm2/apps') const body = await readBody(event) if (!body || !body.appName || !body.appScript || !body.appPath) { setResponseStatus(event, 400) return { success: false, error: 'parametros obrigatΓ³rios.' } } try { // 1. Construir o comando do PM2 let pm2Command = `pm2 start ${body.appScript} --name ${body.appName}` if (body.appPath) pm2Command += ` --cwd ${body.appPath}` // 2. Executar o comando do PM2 await execShellCommand(pm2Command) return { success: true, message: `Aplicativo '${body.appName}' adicionado ao PM2 com sucesso!` } } catch (error: any) { console.log(error.message) setResponseStatus(event, 500) return { success: false, error: 'Erro ao adicionar o aplicativo ao PM2.' } } }) ``` Neste exemplo, podemos observar como o H3 simplifica o tratamento de requisiçáes e respostas atravΓ©s do `defineEventHandler`. A função `readBody` Γ© utilizada para extrair e validar os dados da requisição de forma assΓ­ncrona. 
### Configuração do Nginx O endpoint para criar e ativar configuraçáes do Nginx demonstra como lidar com operaçáes de sistema de arquivos e execução de comandos shell em sequΓͺncia: ```typescript import * as fs from 'fs' export default defineEventHandler(async (event: any) => { console.log('[POST] /api/nginx/sites') const body = await readBody(event) if (!body || !body.siteName || !body.siteConfig) { setResponseStatus(event, 400) return { success: false, error: 'parametros obrigatΓ³rios.' } } const availableSitesPath = '/etc/nginx/sites-available' const newSiteFilePath = `${availableSitesPath}/${body.siteName}.conf` try { // 1. Verificar se o site jΓ‘ existe const siteExists = await fs.promises.access(newSiteFilePath, fs.constants.F_OK) .then(() => true) .catch(() => false) if (siteExists) { setResponseStatus(event, 409) return { success: false, error: `JΓ‘ existe uma configuração para o site '${body.siteName}'.` } } // 2. Escrever a configuração do site no arquivo await fs.promises.writeFile(newSiteFilePath, body.siteConfig) return { success: true, message: `Configuração do site '${body.siteName}' criada com sucesso!` } } catch (error: any) { console.log(error.message) setResponseStatus(event, 500) return { error: 'Erro ao criar a configuração do site.' } } }) ``` Este endpoint demonstra como o Nuxt 3 e H3 permitem uma integração suave entre operaçáes de sistema de arquivos assΓ­ncronas e execução de comandos shell, tudo dentro de um ΓΊnico handler de evento. ## Consideraçáes de SeguranΓ§a Aprofundadas ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eldnq9tp391fotfipvip.jpg) Ao desenvolver uma API com tal nΓ­vel de controle sobre o servidor, a seguranΓ§a se torna uma preocupação primordial. Vamos explorar em detalhes algumas medidas de seguranΓ§a essenciais: 1. **Autenticação e Autorização Robustas**: - Implementar um sistema de autenticação JWT (JSON Web Tokens) para todas as rotas da API. - Utilizar middleware de autorização para verificar permissΓ΅es especΓ­ficas para cada endpoint. - Considerar a implementação de um sistema de roles (funçáes) para um controle de acesso mais granular. 2. **Validação Rigorosa de Entrada**: - Utilizar bibliotecas como `zod` ou `joi` para validação de esquema dos dados de entrada. - Sanitizar todas as entradas para prevenir ataques de injeção de comando e XSS. - Implementar limitação de taxa (rate limiting) para prevenir ataques de forΓ§a bruta. 3. **PrincΓ­pio do PrivilΓ©gio MΓ­nimo**: - Criar um usuΓ‘rio dedicado no sistema operacional com permissΓ΅es estritamente necessΓ‘rias. - Utilizar `sudo` com comandos especΓ­ficos ao invΓ©s de dar acesso root completo. - Implementar um sistema de lista branca para comandos permitidos. 4. **Monitoramento e Auditoria**: - Implementar logging detalhado de todas as açáes realizadas pela API. - Utilizar um serviΓ§o de monitoramento como Datadog ou New Relic para alertas em tempo real. - Realizar auditorias regulares dos logs e das configuraçáes de seguranΓ§a. 5. **HTTPS e SeguranΓ§a da Rede**: - Garantir que toda a comunicação com a API seja feita atravΓ©s de HTTPS. - Implementar CORS (Cross-Origin Resource Sharing) de forma restritiva. - Considerar o uso de uma VPN para acesso Γ  API em ambientes de produção. 6. **Gerenciamento Seguro de Segredos**: - Utilizar variΓ‘veis de ambiente ou um serviΓ§o de gerenciamento de segredos como AWS Secrets Manager ou HashiCorp Vault. - Nunca armazenar senhas ou chaves diretamente no cΓ³digo ou em arquivos de configuração versionados. 7. 
**Atualizaçáes e Patches**: - Manter todos os pacotes e dependΓͺncias atualizados regularmente. - Implementar um processo de CI/CD que inclua verificaçáes de seguranΓ§a automΓ‘ticas. ## ConclusΓ£o e ReflexΓ΅es Finais A implementação desta API central de automação utilizando Nuxt 3 e H3 transformou significativamente meu fluxo de trabalho de implantação de servidores Node.js. Tarefas que antes exigiam acesso SSH manual e execução de mΓΊltiplos comandos agora podem ser realizadas com uma simples chamada de API, reduzindo drasticamente o tempo de configuração e minimizando erros humanos. A escolha do Nuxt 3 como framework para esta solução provou-se acertada, oferecendo um equilΓ­brio ideal entre performance, facilidade de desenvolvimento e flexibilidade. A integração nativa com o H3 para rotas de API proporcionou uma base sΓ³lida e eficiente para a construção dos endpoints necessΓ‘rios. No entanto, Γ© crucial ressaltar que uma API com esse nΓ­vel de controle sobre o servidor representa tanto uma poderosa ferramenta quanto uma significativa responsabilidade. A implementação de medidas de seguranΓ§a robustas nΓ£o Γ© apenas recomendada, mas absolutamente essencial. Cada endpoint deve ser tratado como um potencial vetor de ataque, e a seguranΓ§a deve ser uma consideração primΓ‘ria em cada estΓ‘gio do desenvolvimento e operação da API. Olhando para o futuro, vejo vΓ‘rias possibilidades de expansΓ£o e melhoria desta solução: 1. **Integração com Sistemas de Orquestração**: Considerar a integração com ferramentas como Kubernetes ou Docker Swarm para gerenciamento de contΓͺineres em larga escala. 2. **Implementação de Webhooks**: Adicionar suporte a webhooks para notificar sistemas externos sobre eventos importantes, como a criação bem-sucedida de um novo servidor. 3. **Interface de UsuΓ‘rio**: Desenvolver uma interface de usuΓ‘rio amigΓ‘vel utilizando Vue.js para complementar a API, facilitando ainda mais o gerenciamento de servidores. 4. **ExpansΓ£o para Outros ServiΓ§os**: Estender a funcionalidade para abranger outros serviΓ§os alΓ©m do Node.js, como bancos de dados ou servidores de cache. Em conclusΓ£o, esta solução nΓ£o apenas otimizou meu processo de trabalho, mas tambΓ©m abriu novas possibilidades para a automação e gestΓ£o de infraestrutura. Com as devidas precauçáes de seguranΓ§a, acredito que abordagens similares podem beneficiar significativamente equipes de desenvolvimento e operaçáes, promovendo uma cultura de DevOps mais eficiente e Γ‘gil.
wgbn
1,910,859
A step-by-step walkthrough on how to host a Static Website on Microsoft Azure Blob Storage
In Azure Storage are designed to store various types of data, including structured, semi structured...
0
2024-07-03T23:03:08
https://dev.to/ikay/a-step-by-step-walkthrough-on-how-to-host-a-static-website-on-microsoft-azure-blob-storage-m8c
blobstorage, azure, staticwebsite, visualstudiocode
In Azure Storage are designed to store various types of data, including structured, semi structured and unstructured data. Blob Storage is specifically optimized for storing large amounts of unstructured data, like text or binary files. Azure allows you to upload your static website easily, share its link anywhere, and make it accessible to anyone. Hosting your static website on Azure is free, but you'll need to pay for the storage account. Azure ensures data security using encryption techniques. This article provides step-by-step instructions with screenshots to help you host your static website successfully through **Azure and visual studio code**. Just follow these steps to get your static website up and running. **Steps – Hosting a static website with Azure Storage** **STEP 1** Open your Google drive link then click on mywebsite to download the file to your PC. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hkvvrcgbo82vvh76djb2.png) **STEP 2** Click on your **Download tab** and locate the **mywebsite-20240629T172754Z-001** file to extract the file. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6qebwx5hqqsemweo8kf7.png) Install **Visual Studio Code** on your desktop. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1ob31ytcrkh8vvduf7ec.png) **STEP 3** Open your File in the visual studio code then Click on file at the top -left corner of your **Visual Studio Code** Click on **Open file** and select the folder that houses your static website codes and data. Click on your file drop-down and click Index.html ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oxbj15mhil7u5uk00ms2.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/35iu2t58duhdi4nt3exy.png) **STEP 4** A. Change only white text on command line 7 and 47 to your name B. Change orange text on command line 59 from index.html to about.html to test run and see when it’s deployed and not part of personal information to change. C. Edit command line 92, 93 and 109 to reflect your name and other details. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zcg59g0p4qlyi69mq979.png)![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5j8fmczc09c7ce0v2bz1.png)![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3t0jrsif9lr0mnl0qic1.png)![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5hhfdhkds1e5ug5yaa18.png) D. Click on your file drop-down and click 404.html as the error page. E. Change only white text on command line 7 and 47 to your name. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8zobmf0l6xyt8esq7os9.png)![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o0qqhif6p0g6i6mv37gs.png) To access Azure, you need to log in to your Azure account using your credentials. Search and Navigate to **Storage Accounts** and then click on **Create** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e64urzfbos4089x5ll5p.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rdw2uwpccaxldc5wlu9l.png) **To set up a Storage Account** 1. Go to the **Basics tab**, under **Project details** and ensure the correct subscription is chosen. From the resource group list, select **azurestorgaeRG** or create a new one if needed. 2. Enter a unique name for the Storage account. 3. 
Choose a region for its location. 4. Select **Standard** for performance. 5. Choose **Geo Redundant Storage (GRS)** for redundancy. 6. Proceed to the next step or adjust other settings if necessary; otherwise, keep them as default. 7. Click on **Review + Create** to finalize the setup. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6mslyas6zs79nung7lst.png) **Validation** Before proceeding, validate the configuration of the storage account to ensure everything is set up correctly. If the configuration is correct, the option to create the storage account will be available. If not, review all configurations thoroughly before clicking on the Create button. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kku7dxto5j68gp2qzqem.png) **Go to Resources** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7160yqqmwqoxai2hspa9.png) **STEP 5** To find the Static Website settings: 1. Click on Data Management dropdown. 2. Select Static Website ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/or0b1il9wbdo11q7kpb4.png) **To set up the Static Website** 1. Enable the Static Website feature. 2. Enter the name of the index document. 3. Enter the name of the error document. Afterward, Azure generates two links: Primary and Secondary endpoints. Copy the Primary endpoint link. Azure also creates a storage container named $WEB to host the static website; click on it. After that two option appears on the screen (Disable and Enable), Click on the Enable. 4. Click on Save icon for your changes. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/insul1hwrfrvgo0nmh3y.png) **Go to Containers** 1. Open the Data storage dropdown. 2. Select Containers 3. Click on $WEB ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1mbln3fzpbeo5rouvkp4.png) **To upload files from your PC** 1. Click on Upload 2. Navigate to the folder on your computer where the website files are located. 3. Select all then Drag and drop the files from this location into the designated box. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yomcfxxqrqljp598z3uw.png) **To test run on a browser** When your file is successfully uploaded then paste your primary link on your browser. At this point your Static Website is ready. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fn3bhqfhw2o2k0jswcyl.png) **STEP 6** **Connecting your static website to your Azure Account through Visual Studio Code** Click on the **Azure extension** to install your Azure account and Azure Storage ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pa1wedc3ulzz86euxb1j.png)![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n2iwqxy28i1ldw8pe0sf.png)![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ea1iuctzp55aq76qg38l.png) Click on explorer then drag the path and right click to deploy to static website via Azure storage to sign in to Azure account. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0g85196w1lnm1pw5781c.png)![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/50tay6tyynbsiwh3b9oh.png)![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m8m8f43xzwhivyvaptuo.png) Click on the **Resources** drop- down and then click on the **Azure subscription** drop- down ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wno76qj05ielw1i0i835.png) Next click on the **storage account** drop-down ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jn33d5o3udf4p96t9s99.png) Click on **Deploy to the static website via Azure storage** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7tdw77lakj2kjnizcehm.png) You will be instructed to select your preferred folder to deploy the static website. After the deployment is completed, ensure to click on browser static website. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/op3djsb7xc0g2z1kasem.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q0b5ypwc5bew3qtntwgp.png) At this point your Static Website is ready that was done through visual studio code ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7a8fkukiog570wakjzv4.png) Thank you for sticking with this post and going through the steps, I hope you were able to use this guide to setup a static website on Azure Storage.
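If you prefer to script the portal steps above, the same setup can be sketched with the Azure CLI. The resource group, storage account name, and region below are placeholders you would replace with your own values.

```bash
# Sign in, then create the resource group and storage account
az login
az group create --name azurestorageRG --location eastus
az storage account create \
  --name mystaticsite2024 \
  --resource-group azurestorageRG \
  --location eastus \
  --sku Standard_GRS

# Enable static website hosting with the index and error documents
az storage blob service-properties update \
  --account-name mystaticsite2024 \
  --static-website \
  --index-document index.html \
  --404-document 404.html

# Upload the site files to the $web container
az storage blob upload-batch \
  --account-name mystaticsite2024 \
  --source ./mywebsite \
  --destination '$web'

# Print the primary endpoint to open in a browser
az storage account show \
  --name mystaticsite2024 \
  --resource-group azurestorageRG \
  --query "primaryEndpoints.web" --output tsv
```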
ikay
1,910,875
Best Practices for Writing API Documentation
Even the most feature-rich interface can be a hassle to use without proper user guidance. In today's...
0
2024-07-03T22:58:27
https://dev.to/irhose/best-practices-for-writing-api-documentation-gc9
bestpractices, api, productivity, devops
Even the most feature-rich interface can be a hassle to use without proper user guidance. In today's API-driven ecosystem, well-crafted documentation is a crucial step to success. Developing and maintaining API docs presents its own challenges. Striking a balance between granular detail and user-friendly writing is key to ensuring your documentation is both informative and engaging. This is precisely where the selection of appropriate tools becomes apparent. These tools not only streamline the process of managing your API documentation, but also guarantee its continued relevance and effectiveness, directly impacting the adoption and ongoing maintenance of your APIs. My favourite API documentation tool is [APIToolkit](https://apitoolkit.io?utm_source=dev.to&utm_medium=referral&utm_campaign=TCA) because it generates docs for you while ensuring you can spot broken endpoints in real time. Here's a roadmap to crafting API documentation that delights both developers and decision-makers: ## 1. Embrace Transparency: Document Every Request and Response Exhaustive detail is your friend in API documentation. Users won't be overwhelmed by how comprehensive it is; they'll appreciate the clarity. Remember, new users often require a helping hand until they're used to it. Don't shy away from seemingly basic details, like signup instructions. After all, even the most intuitive processes can benefit from clear explanation. ## 2. Empower Users with Comprehensive Resources Go beyond the core text and equip your users with an arsenal of resources to expedite their API mastery. The ultimate goal is to empower them to achieve success swiftly. Consider incorporating these enhancements: - Additional Data: Provide supplemental information that sheds light on specific API functionalities. - External Tools: Integrate links to relevant tools that can streamline development workflows. - Code Examples: Offer illustrative code snippets to demonstrate API usage in various programming languages. - Tutorials: Develop step-by-step guides that walk users through common use cases. ## 3. SDKs and Libraries Developers can quickly call many resources thanks to code libraries. Developers will feel more at ease using your API if there are quick and simple ways to use it in several languages. SDKs are challenging to create and are not necessary for launch, but they can significantly increase API use. Having SDKs is a fantastic method to interact with the developer community if your business strategy is based on a public or open API paradigm. In such a case, there is a significant probability that if developers see value in your SDKs and APIs, they will build upon it or add more libraries. The Swagger Codegen project enables teams to quickly create SDKs from their API documentation. ## 4. Interactive Console Encourage potential customers to check what they read in the API documentation using the API console right away. A console makes getting started quick and easy, with no risk to the consumer. Experimentation is powerful. The work required to build a console or sandbox environment for users to interact with your API is rather low, but it can greatly aid engineers in understanding the value of your API graphically. Many organizations, including Microsoft and GitHub, provide interactive consoles for experimenting with their API services. ## Final Thoughts These best practices will guide your API doc creation process. 
Whether you're getting help from a tool like [APIToolkit](https://apitoolkit.io?utm_source=dev.to&utm_medium=referral&utm_campaign=TCA) or you're developing your docs from scratch, knowledge of these best practices will ensure you create helpful docs for your audience.
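As a concrete illustration of the first point above, a documented endpoint is easiest to adopt when the exact request and a realistic response sit side by side. The endpoint, key, and payload below are hypothetical and only show the shape such an example might take.

```bash
# Example request a reader can copy and adapt to their own base URL
curl -s -X GET "https://api.example.com/v1/users/42" \
  -H "Authorization: Bearer <your-api-key>" \
  -H "Accept: application/json"

# Example documented response (200 OK):
# {
#   "id": 42,
#   "name": "Ada Lovelace",
#   "email": "ada@example.com",
#   "created_at": "2024-07-01T12:00:00Z"
# }
```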
irhose
1,910,874
Seeking Guidance: Full-Stack Developer vs. Software Engineer in the Age of AI
As the latest computer science graduate, I find myself at a crossroads. The rapid advancements in...
0
2024-07-03T22:57:56
https://dev.to/ryoichihomma/full-stack-developer-vs-software-engineer-which-roles-faces-greater-impact-from-ai-232e
softwareengineering, fullstackdevelopment, developers, ai
As the latest computer science graduate, I find myself at a crossroads. The rapid advancements in artificial intelligence (AI) are reshaping the tech industry, and I am concerned about the potential impact on my career. I am currently contemplating whether to pursue a role as a full-stack developer or a software engineer. ## My Background: I'm a recent CS graduate from Thompson Rivers University in BC, Canada. I have a strong passion for full-stack development. My goal in the future is to work at [Driveline Baseball](https://www.drivelinebaseball.com/), where not only baseball coaches but also developers, software engineers, data analysts, and data scientists analyze the amount of data in baseball to help baseball players from technological, biological, and scientific approaches. ## The Dilemma: With AI becoming increasingly capable, I am worried about the longevity and relevance of different tech roles. I want to choose a career path that not only aligns with my interests but also ensures that I won't be easily replaced by AI in the future. ## Seeking Your Insights: If you were in my shoes as a recent CS graduate, which role would you choose and why? Do you believe a full-stack developer or a software engineer role offers better prospects in the AI-driven future? Your professional insights and experiences would be invaluable in helping me make an informed decision. ### Key Points for Discussion: - The impact of AI on full-stack development vs. software engineering. - The future demand for full-stack developers and software engineers. - Skills and areas to focus on to remain relevant in the tech industry. - Personal experiences and advice for a new graduate navigating this landscape. I greatly appreciate your time and look forward to hearing your thoughts and advice. Your feedback will not only help me but also others in the community who might be facing similar dilemmas. Thank you!
ryoichihomma
1,910,857
[Game of Purpose] Day 46 - Simple Battery System
Today I implemented a simple battery system. It has a status indicator in the top right corner. It...
27,434
2024-07-03T22:06:20
https://dev.to/humberd/game-of-purpose-day-46-simple-battery-system-4b36
gamedev
Today I implemented a simple battery system. It has a status indicator in the top right corner. It has a configurable rate of depletion and automatically turns the engine off when it runs out of power. ![A 3D rendered scene showing a drone on a grid-patterned platform, with a clear sky and simple geometric structures in the background. The interface indicates "Engine Off" and displays a 0% status bar.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m97fqv82toma4ni8smx3.png) {% embed https://youtu.be/g2SHBqb4fb0 %} First I create a `BPC_Drone_Battery` component. It has an event `DrainTick` called every `TickEvery` amount of time and calls a `Drain Charge` function. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c2hesj3ur7wimjs9x14r.png) `Drain Charge` function simply takes `Drain Per Tick` away from a total charge and calls `Update Charge`. It does nothing when the battery is off. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nos8zhd67i9beqij1sm5.png) `Update Charge` updates the `Charge` variable and calls events: `Battery Charge Changed` or when it drops to zero `Battery Charge Depleted`. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q3dnli4es1rv91vezxn0.png)
humberd
1,910,854
ScrapeAnyWebsite - Exploratory Test Report
Introduction In my role as a software testing intern at HNG TECH LIMITED, I recently got...
0
2024-07-03T22:55:31
https://dev.to/therayo/scrapeanywebsite-test-report-3bfn
testing, softwaretesting, development, python
## Introduction In my role as a software testing intern at HNG TECH LIMITED, I recently got the chance to assess and try out ScrapeAnyWebsite, a Python tool created for simplifying web scraping tasks.Β  This post summarizes my discoveries, points out a major issue, and proposes enhancements to improve the application's performance. My aim is to offer useful insights that can benefit both ScrapeAnyWebsite users, testers, and developers. ###Summary of ScrapeAnyWebsite ScrapeAnyWebsite is software for desktop computers that enables you to effortlessly extract data from any website. You have the option to input either a single url or multiple urls, and then quickly extract data from those websites to your device. The tool abides by all limitations on downloading from websites as determined by the website owner. **Key Features** 1. **Easy-to-use interface**: Streamlines web scraping complexities, enabling users to concentrate on extracting data. 2. **Flexibility**: Ability to accommodate multiple URLs. 3. **Integration**: Seamlessly gives speedy results, improving its usefulness for data analysis. 4. **Headless mode**: The user has the option to operate in headless mode or not. **Testing Process** My testing process involved the following steps: 1. Installation and configuration of ScrapeAnyWebsite for testing on my Mac via the Parallels Desktop application on the AppStore. Note: ScrapeAnyWebsite is not available on the App Store. 2. Assessing the application features. 3. I then tried to understand the main aim of the application and how to carry out my test plan. 4. Evaluating the desktop tool's efficiency and stability in different scenarios.Β  5. Assess the performance and robustness of the tool under various conditions, such as OS. **Test Report** While testing, so many inconsistencies, functional, performance, and usability issues were encountered, which I have noted down here for the development team to work on in order to provide a great user experience for our users. Below are the reports of the tests carried out: 1. A user is not able to add a scrape name since the user cannot see what is being typed. This can lead to the error of typing the wrong name. 2. The user cannot save a scrape name because there's nothing showing in the "enter scrape name" field. 3. The user was unable to scrape a website even after adding the URL. 4. The user was able to scrape a website. 5. The user is unable to edit the details via the edit button. If the user is not allowed to edit input details, the functionality shouldn't be there. Please see the screenshot. 6. The "stop scraping" button takes time to stop. It lingered for about 10 seconds even after several clicks before stopping. See screenshot. 7. The user was able to edit the scrape job name successfully. 8. No new URL was added after toggling on the "Discover new URLs" button for several minutes. If no new URLs were found, they should be communicated. 9. The user was able to successfully delete a scraped job name at the first attempt. 10. The "stop scraping" function did not respond at all. It got stuck for about three minutes; even after exiting the said scrape job name, clicking, and trying to stop it, nothing changed. See screenshots. 11. While scrolling down over 2,000 URLs, it jumps back to the top, leaving the user clueless and incapable of scrolling to a desired URL. **Information for Installation:** Below is the information you might need for a successful installation: 1. OS: Windows 10 version 17763.0 or higher. 2. 
Installed version: 1.1.19.0 3. Link to the website: [Scrape Any Website](https://www.microsoft.com/store/productId/9MZXN37VW0S2?ocid=pdpshare) 4. Link to the download page: [Scrape Any Website](https://apps.apple.com/ng/app/parallels-desktop/id1085114709?mt=12) 5. Bug Report: [Bug Report](https://docs.google.com/spreadsheets/d/1-EgKcURXeOPewav6cElD7p3DKtXAbbd4Js8z1UABYuM/edit?gid=0#gid=0) **Summary** Trying out ScrapeAnyWebsite has proven to be a beneficial learning opportunity. The tool shows potential for web scraping, but enhancements are required for efficient handling and a great user experience. By tackling these problems and improving the tool's features, ScrapeAnyWebsite can evolve into a more robust tool for extracting data. **Call to Action** I urge users and developers to take part in enhancing ScrapeAnyWebsite's growth. Provide feedback, propose improvements, and engage in the community to assist in enhancing this tool. Collaboratively, we can enhance the accessibility and effectiveness of web scraping for all individuals. **Stay Tuned** Follow my blog for additional information and updates on software testing and tech products. We will look into ethical web scraping practices in upcoming posts and delve into advanced data extraction methods. Enjoy scraping! Happy Testing and Improving!πŸ˜ŽπŸ‘ Motunrayo Joseph | Product Tribe.
therayo
1,910,139
GraphQL and REST API
We will introduce GraphQL in this article and compare it with REST API. Have you ever worked on...
0
2024-07-03T22:53:25
https://blog.logto.io/graphql-and-rest-api/
webdev, api, coding, opensource
We will introduce GraphQL in this article and compare it with REST API. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/00qok01xpsw7bkb9e398.png) Have you ever worked on a web app development project before? If so, then you may have already encountered the term "GraphQL". But what exactly does it mean? Is it used in server-side or client-side configurations? Additionally, when is GraphQL integration preferable to other alternatives? In this article, we will guide you through these questions. As a communication channel for data transfer between software components over the internet, [APIs](https://en.wikipedia.org/wiki/API) play an indispensable role in modern web service architectures, acting like oxygen. API technologies such as SOAP (a web services messaging protocol), REST (an architectural style), and GraphQL (a query language and tool) facilitate software development by supporting integration and data exchange between different services, making development more convenient and flexible. Despite the numerous types of APIs, debates in recent years have predominantly centered on two main paradigms: REST (representational state transfer) and GraphQL. Both offer a range of advantages and are used globally in web projects. However, they differ significantly in how they manage data communication. # What is REST API? REST is a structured architectural style developed in the early 2000s for building cross-network hypermedia applications, intended to adopt a stateless client/server cacheable communication protocol. A REST API (also known as a RESTful API) is the driver of the REST architecture. A REST API uses Uniform Resource Identifiers (URIs) to address resources. REST APIs work by using different endpoints to perform CRUD (create, read, update, and delete) operations on network resources. They rely on predefined data formats (known as media types or MIME types) to determine the form and size of the resources they provide to clients. The most common formats are JSON and XML (sometimes HTML or plain text). After the client requests a resource, the server processes the query and returns all data related to that resource. The response contains HTTP response codes such as "200 OK" (for a successful REST request) and "404 Not Found" (for a non-existent resource). # What is GraphQL? GraphQL is both an API type (or query language) and a runtime engine for responding to those queries. It offers a simplified API and is particularly suited for mobile applications and the implementation of complex architectures that require specific data subsets. With GraphQL, developers can specify the data they want to retrieve from the API. It also allows for on-demand data retrieval instead of fetching everything at once, applies changes immediately, and integrates third-party data sources into the application. Applications can use GraphQL queries to call GraphQL services. These queries return the exact data elements requested by the client. This saves multiple API calls, network bandwidth, and post-processing. It is a highly efficient solution for data-centric APIs that are located close to consumer devices such as tablets and mobile phones. # How does GraphQL work? Each GraphQL service defines a set of types to describe the possible data you can query in that service (i.e., its schema). Then, when queries and mutations (the types of operations supported by GraphQL) arrive, they are validated and executed based on that schema and processed using resolvers.
Therefore, a complete GraphQL implementation must have two parts: schema and resolvers. ### GraphQL operations The operations supported by GraphQL include: - Query: Used to READ data - Mutation: Used to WRITE data, including operations to add, modify, and delete data - Subscription: Used to request a persistent connection for data, allowing the client to declare the types of events it is interested in and the data that should be returned Queries and mutations rely on on-demand HTTP connections; subscriptions rely on long-lived connections such as WebSocket. ### GraphQL schema Schemas define the types of data and their fields that can be retrieved through GraphQL queries: ``` type OrganizationRole { id: ID! scopes: [String] } type Organization { id: ID! name: String! roles: [OrganizationRole] } type Role { id: ID! name: String! description: String scopes: [String] } type User { id: ID! username: String primaryEmail: String roles: [Role] organizations: [Organization] } type Query { users: [User] } ``` Let's take Logto as an example. When validating permissions on developers business APIs, we often need to know the Role and Organization corresponding to the current user (as well as the related OrganizationRole) in order to understand what kind of permissions the user has and to decide whether to allow the user to execute the corresponding operation. ### GraphQL resolver A resolver is a function that can retrieve the data for some fields in GraphQL queries. At runtime, the GraphQL query engine will compare each query against the schema to verify if the query is valid. If the query is valid, the query engine will run the resolver for each field to fetch the data for that field. # Differences between GraphQL and REST APIs GraphQL and REST APIs are tools for exchanging data between different web services, but due to their different approaches to solving problems, there are many differences between them. ### Flexibility The implementation of REST API is with finer granularity, and GraphQL can obtain various forms of data through a single API, whereas each individual REST API may only fetch a very limited portion of the data. Referring to the previous example, we can get all relevant data including user, role, and organization in one go through a single GraphQL API. However, if using REST API, in most cases, we may need to first call the `GET /users/:id` API to get the user data, use `GET /users/:userId/roles` to get the role information related to the user, and use `GET /users/:userId/organizations` to get the organization data related of the user. This is not the worst case. After obtaining the data related to the user's role, we may also need to request the `GET /roles/:roleId` API to get the details related to the role. These APIs all depend on the return results of the previous API calls, which in some degree may limit the system's performance. On the other hand, the design of the REST API can limit the function of each API to a very small scope, making it more convenient to call and maintain in engineering practice. While GraphQL provides great flexibility, it also brings some additional workload for usage and maintenance. ### Data retrieval When using GraphQL to fetch data, we can get appropriate data through writing suitable queries without the issues of under- and over-fetching. However, due to the general design of REST API, we often need to call multiple different REST APIs to get exactly the data we need, as we exemplified in the previous section. 
The fetched data may also have an over-fetching issue since we cannot set precise filtering conditions in the API, which requires post-processing of the data fetched by the REST API. ### Caching The REST API has a mature ecosystem, and developers can also use various caching technologies such as server-side or network layer (like CDN) to help improve API performance. However, due to its flexibility, many of the caching technologies available for the REST API are not applicable to GraphQL. It is necessary to handle caching on the client-side, based on specific use cases, which brings extra workload to client development. There are many other differences between GraphQL and REST API as well. For instance, GraphQL cannot upload files, but the REST API can. Due to the limited space in this article, we can't list all of them, but with the development of time, these limitations might be resolved in the near future. # Which one should I use? GraphQL or REST APIs? There are some situations where GraphQL is a better practice compared to REST API: - When client-side requirements change frequently In this case, using GraphQL can save a lot of development time on the server side, otherwise, it would take a lot of time to repeatedly modify the API implementation. - When the client side is sensitive to network latency In such cases, multiple network requests and dependencies between requests can lead to a very poor user experience on the client side, and using fewer requests to achieve the same data acquisition goals becomes necessary. GraphQL is a better choice compared to REST API in this situation. Of course, if there are enough resources, more task-specific REST APIs can also be developed for such use cases. # Conclusion Although GraphQL has gained strong momentum as a newcomer in recent years, REST API still holds significant importance for a long time due to its atomic design and rigor. By using REST API, the form of input and output for a single API can be strictly restricted, with many special limitations added in the implementation of the API. It is also easy to learn and get started. With comprehensive API documentation, users can easily find the APIs they need to build systems that meet their requirements. {% cta https://logto.io/?ref=dev %} Try Logto Cloud for free {% endcta %}
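To make the difference in round trips concrete, here is a sketch using curl against a hypothetical server exposing the schema shown earlier; the hostnames, paths, and IDs are placeholders.

```bash
# GraphQL: one request returns users together with their roles and organizations
curl -s -X POST "https://api.example.com/graphql" \
  -H "Content-Type: application/json" \
  -d '{"query": "{ users { id username roles { name scopes } organizations { name roles { scopes } } } }"}'

# REST: the same data typically takes several dependent requests
curl -s "https://api.example.com/users/123"
curl -s "https://api.example.com/users/123/roles"
curl -s "https://api.example.com/users/123/organizations"
curl -s "https://api.example.com/roles/admin-role-id"
```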
palomino
1,910,873
Efficient User and Group Management on Linux: A Bash Script Tutorial
Table Of Content Introduction script overview Conclusion Introduction A Bash...
0
2024-07-03T22:44:14
https://dev.to/linsmed/efficient-user-and-group-management-on-linux-a-bash-script-tutorial-4k16
linux, bash, devops, sysop
## Table of Contents 1. Introduction 2. Script Overview 3. Conclusion ## Introduction A Bash script is a text file containing a series of commands written in the Bash (Bourne Again SHell) scripting language, a command processor that typically runs in a text window where the user types commands to perform actions. A Bash script allows users to automate repetitive tasks, manage system operations, and perform complex operations by executing commands. As a SysOps engineer, managing user accounts and groups is crucial for maintaining a secure and organized development environment. In this article, I will guide you through a bash script designed to automate user creation, assign groups, set up home directories, and handle permissions efficiently. The script, create_users.sh, performs the following tasks: 1. Reading Input: It reads a text file containing usernames and their associated groups. 2. User and Group Creation: It creates users and their groups if they do not already exist. 3. Home Directory Setup: It sets up home directories with appropriate permissions and ownership. 4. Password Generation: It generates random passwords for the users. 5. Logging: It logs all actions to /var/log/user_management.log and stores the passwords securely in /var/secure/user_passwords.csv. ### Script Overview Here's a detailed breakdown of the script ($log_file, $password_file, and $input_file hold the log path, the password file path, and the input file name respectively): 1. Checking the input argument: The script checks if the input file is provided as an argument. ``` if [ -z "$1" ]; then echo "Usage: $0 <name-of-text-file>" exit 1 fi ``` 2. Password generation and input parsing: It defines a helper that generates a random password, then reads the input file line by line and creates any groups that do not exist yet. ``` # Function to generate a random password generate_password() { # use 'openssl rand -base64 12' to generate a random password openssl rand -base64 12 } # Read input file line by line while IFS=';' read -r username groups; do # Create groups if they don't exist for group in $(echo "$groups" | tr ',' ' '); do groupadd "$group" 2>/dev/null || echo "Group $group already exists" done ``` 3. User creation: It creates the user with a home directory and adds them to their groups. ` useradd -m "$username" -G "$groups" 2>/dev/null || echo "User $username already exists"` 4. Password assignment: It generates a password and sets it for the new user. ` password=$(generate_password) echo "$username:$password" | chpasswd` 5. Logging: It appends a timestamped record of each created user to the log file. `echo "$(date '+%Y-%m-%d %H:%M:%S') - Created user $username with groups: $groups" >> "$log_file"` 6. Password storage: It stores the username and password securely and closes the read loop. ` echo "$username:$password" >> "$password_file" done < "$input_file"` ### Conclusion This script simplifies managing users and groups on a Linux system, ensuring security and efficiency. Automating these tasks not only saves time but also reduces the risk of human error. For more information on the HNG internship program and to learn how to become a world-class developer, visit [HNG Internship](https://hng.tech/internship) and [HNG Premium](https://hng.tech/premium). You can also find the code to the bash script on my GitHub here: [Linsmed](https://github.com/Linsmed/hng_stage_1_task)
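To try the script, an input file and invocation along these lines should work; the usernames and groups below are only sample data, and the script is assumed to be saved as create_users.sh.

```bash
# Sample input file: one "username;group1,group2" entry per line
cat > users.txt <<'EOF'
light;sudo,dev,www-data
idimma;sudo
mayowa;dev,www-data
EOF

# Run as root, then inspect the results
chmod +x create_users.sh
sudo ./create_users.sh users.txt
sudo tail /var/log/user_management.log
sudo cat /var/secure/user_passwords.csv
```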
linsmed
1,910,829
Recreate Angular Logo with SVG
Introduction Using path elements, the first thing to create is the outer shield. The logo...
0
2024-07-03T22:35:47
https://dev.to/dindustack/recreate-angular-logo-with-svg-n66
webdev, javascript, beginners, programming
#**Introduction**
Using path elements, the first thing to create is the outer shield. The logo shield is made of a lighter and a darker portion, which creates an illusion of a fold. Let's create the shield.

```html
<svg width="800" height="600">
  <path fill="#dd002e" d="M 0 50 L 100 0 L 200 50 L 190 185 L 100 235 L 10 185 Z"></path>
  <path fill="#c4002b" d="M 100 0 L 200 50 L 190 185 L 100 235 Z"></path>
</svg>
```

![Angular logo shield](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7dvexd2j8bkj0vdsq0uy.png)

###**Explanation:**
For the lighter portion (the full shield), there are seven instructions:
**M** 0 50 **L** 100 0 **L** 200 50 **L** 190 185 **L** 100 235 **L** 10 185 **Z**
The instructions work with coordinates: **M** is the initiating origin, where the path starts from; the **L** command draws a line to a coordinate; and the **Z** command closes the path. The darker portion reuses four of these points to cover the right half of the shield, creating the fold. The **H** command, used below, draws a horizontal line to a given x-coordinate.

###**To draw letter A**

```html
<!-- <svg width="800" height="600"> -->
<!-- <path fill="#dd002e" d="M 0 50 L 100 0 L 200 50 L 190 185 L 100 235 L 10 185 Z"></path> -->
<!-- <path fill="#c4002b" d="M 100 0 L 200 50 L 190 185 L 100 235 Z"></path> -->
<path fill="#fff" d="M 100 35 L 30 185 H 50 L 100 70 Z"></path>
<path fill="#fff" d="M 100 35 L 170 185 L 150 185 L 100 70 Z"></path>
<path fill="#fff" d="M 78 120 L 122 120 L 132 140 L 68 140 Z"></path>
<!-- </svg> -->
```

###**Result**

![Angular logo with letter A](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jx2pee6ihrtgstxl8mzh.png)

<iframe height="300" style="width: 100%;" scrolling="no" title="SVG Angular Logo" src="https://codepen.io/dindustack/embed/ZEdzEEE?default-tab=html%2Cresult" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true">
See the Pen <a href="https://codepen.io/dindustack/pen/ZEdzEEE"> SVG Angular Logo</a> by Chinwendu (<a href="https://codepen.io/dindustack">@dindustack</a>) on <a href="https://codepen.io">CodePen</a>.
</iframe>
dindustack
1,910,872
Part 2: Design Principles in Software Development
In software development, adhering to design principles is crucial for creating robust, maintainable,...
27,935
2024-07-03T22:35:26
https://dev.to/moh_moh701/part-1-design-principles-in-software-development-4mgp
architecture, dotnet
In software development, adhering to design principles is crucial for creating robust, maintainable, and scalable applications. Here, we'll explore five fundamental design principles: Dependency Inversion, Separation of Concerns, Single Responsibility, DRY (Don't Repeat Yourself), and Persistence Ignorance. Each principle helps to ensure that your codebase remains clean and efficient. #### 1. Dependency Inversion Principle (DIP) The Dependency Inversion Principle is the last of the SOLID principles. It states that high-level modules should not depend on low-level modules; both should depend on abstractions. Additionally, abstractions should not depend on details. Details should depend on abstractions. **Example:** ```csharp public interface IMessageService { void SendMessage(string message); } public class EmailService : IMessageService { public void SendMessage(string message) { // Send email } } public class Notification { private readonly IMessageService _messageService; public Notification(IMessageService messageService) { _messageService = messageService; } public void Notify(string message) { _messageService.SendMessage(message); } } ``` In this example, the `Notification` class depends on the abstraction `IMessageService` rather than a concrete implementation like `EmailService`. This allows for easy swapping of different message services without modifying the `Notification` class. #### 2. Separation of Concerns (SoC) Separation of Concerns involves organizing code into distinct sections, each responsible for a specific functionality. This principle helps to reduce code complexity and improve maintainability. **Example:** In a typical web application, you might separate concerns into layers such as: - Presentation Layer: Handles the UI and user interaction. - Business Logic Layer: Contains the core application logic. - Data Access Layer: Manages data retrieval and storage. ```csharp // Presentation Layer public class UserController { private readonly IUserService _userService; public UserController(IUserService userService) { _userService = userService; } public IActionResult GetUser(int id) { var user = _userService.GetUserById(id); return Ok(user); } } // Business Logic Layer public interface IUserService { User GetUserById(int id); } public class UserService : IUserService { private readonly IUserRepository _userRepository; public UserService(IUserRepository userRepository) { _userRepository = userRepository; } public User GetUserById(int id) { return _userRepository.GetById(id); } } // Data Access Layer public interface IUserRepository { User GetById(int id); } public class UserRepository : IUserRepository { public User GetById(int id) { // Retrieve user from database } } ``` Each layer has a specific responsibility, making the codebase easier to manage and extend. #### 3. Single Responsibility Principle (SRP) The Single Responsibility Principle asserts that a class should have only one reason to change, meaning it should have only one job or responsibility. **Example:** ```csharp public class User { public string Name { get; set; } public string Email { get; set; } } public class UserRepository { public void Save(User user) { // Save user to database } } public class UserValidator { public bool Validate(User user) { // Validate user data } } ``` Here, `User`, `UserRepository`, and `UserValidator` each have a single responsibility, adhering to the SRP. #### 4. 
DRY (Don't Repeat Yourself) The DRY principle emphasizes reducing the repetition of code by abstracting out common functionality into reusable components. **Example:** ```csharp public class EmailService { public void SendEmail(string to, string subject, string body) { // Send email } } public class NotificationService { private readonly EmailService _emailService; public NotificationService(EmailService emailService) { _emailService = emailService; } public void NotifyUser(string email, string message) { _emailService.SendEmail(email, "Notification", message); } } ``` By using the `EmailService` in `NotificationService`, we avoid duplicating email sending logic. #### 5. Persistence Ignorance Persistence Ignorance means that the business logic of an application should not be aware of how data is persisted. This principle ensures that the core logic remains independent of the data access technology used. **Example:** ```csharp public class Product { public string Name { get; set; } public decimal Price { get; set; } } public interface IProductRepository { void Save(Product product); } public class ProductService { private readonly IProductRepository _productRepository; public ProductService(IProductRepository productRepository) { _productRepository = productRepository; } public void AddProduct(Product product) { _productRepository.Save(product); } } ``` The `ProductService` class does not know or care about how the `ProductRepository` class persists data, adhering to the principle of persistence ignorance. ### Conclusion Understanding and applying these design principles can significantly enhance the quality of your software. By following Dependency Inversion, Separation of Concerns, Single Responsibility, DRY, and Persistence Ignorance, you can create applications that are easier to understand, maintain, and extend.
moh_moh701
1,907,803
40 Days Of Kubernetes (9/40)
Day 9/40 Kubernetes Services Explained - ClusterIP vs NodePort vs Loadbalancer vs...
0
2024-07-03T20:27:32
https://dev.to/sina14/40-days-of-kubernetes-940-5863
kubernetes, 40daysofkubernetes
## Day 9/40 # Kubernetes Services Explained - ClusterIP vs NodePort vs Loadbalancer vs External [Video Link](https://www.youtube.com/watch?v=tHAQWLKMTB0) @piyushsachdeva [Git Repository](https://github.com/piyushsachdeva/CKA-2024/) [My Git Repo](https://github.com/sina14/40daysofkubernetes) We're going to look at `service` in `kubernetes` and its types as below list: 1. ClusterIP 2. NodePort 3. LoadBalancer 4. External Name --- The goal is to expose our front-end app to users, we use `service` and its benefits. As you can see in the below diagram, we are using `service` anywhere which it makes sure there is at least one `pod` serving and listening on specific `port` in our `cluster` and it's accessible only to what we specified. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k1n7ixvzbu5fmgx3gniq.png) (Photo from the video) ### 1. Node Port ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zu4jmv64z4gygbrthmnm.png) (Photo from the video) These services are ideal for applications that need to be accessible from outside the `cluster`, such as web applications or APIs. With `NodePort` services, we can access our application using the node’s IP address and the port number assigned to the service. When we create a `NodePort` service, `Kubernetes` assigns a port number from a predefined range of 30000-32767.([source](https://www.baeldung.com/ops/kubernetes-service-types#2-nodeport-services)) There are 3 ports to define: 1. `nodePort` - for external users 2. `port` - for internal clients 3. `targetPort` - container port ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0hpadf717mrxkvzya4t6.png) (Photo from the video) multi-pod scenario ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0cp9rkvvz16dspgke8as.png) (Photo from the video) #### Implementing ```console root@localhost:~# kubectl get nodes NAME STATUS ROLES AGE VERSION lucky-luke-control-plane Ready control-plane 4d v1.30.0 lucky-luke-worker Ready <none> 4d v1.30.0 lucky-luke-worker2 Ready <none> 4d v1.30.0 ``` ```yaml apiVersion: v1 kind: Service metadata: name: nodeport-svc labels: env: demo spec: selector: env: demo type: NodePort ports: - nodePort: 30001 port: 80 targetPort: 80 ``` ```console root@localhost:~# kubectl apply -f nodeport.yaml service/nodeport-svc created root@localhost:~# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d nodeport-svc NodePort 10.96.37.68 <none> 80:30001/TCP 15s root@localhost:~# kubectl get svc -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d <none> nodeport-svc NodePort 10.96.37.68 <none> 80:30001/TCP 23s env=demo ``` **Note** because we are using the `kind` for creating our `cluster`, we need to do extra step to expose our port outside the `kind`. 
[Mapping ports to the host machine](https://kind.sigs.k8s.io/docs/user/quick-start/#mapping-ports-to-the-host-machine) It needs recreating the cluster because of: "_It's not said explicitly in the official docs, but I found some references that confirm: your thoughts are correct and changing extraPortMappings (as well as other cluster settings) is only possible with recreation of the kind cluster._" [source](https://stackoverflow.com/a/68268591) So our `kind` cluster would be: ```yaml # three node (two workers) cluster config kind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 nodes: - role: control-plane extraPortMappings: - containerPort: 30001 hostPort: 30001 - role: worker - role: worker ``` - Delete the current cluster ```console root@localhost:~# kind delete cluster --name `kind get clusters` Deleting cluster "lucky-luke" ... Deleted nodes: ["lucky-luke-control-plane" "lucky-luke-worker2" "lucky-luke-worker"] ``` - Create the cluster again with new feature ```console root@localhost:~# kind create cluster --config kind-lucky-luke.yaml --name lucky-luke Creating cluster "lucky-luke" ... βœ“ Ensuring node image (kindest/node:v1.30.0) πŸ–Ό βœ“ Preparing nodes πŸ“¦ πŸ“¦ πŸ“¦ βœ“ Writing configuration πŸ“œ βœ“ Starting control-plane πŸ•ΉοΈ βœ“ Installing CNI πŸ”Œ βœ“ Installing StorageClass πŸ’Ύ βœ“ Joining worker nodes 🚜 Set kubectl context to "kind-lucky-luke" You can now use your cluster with: kubectl cluster-info --context kind-lucky-luke Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community πŸ™‚ ``` Agian with our `cluster`: ```console root@localhost:~# kind get clusters lucky-luke root@localhost:~# kubectl get nodes NAME STATUS ROLES AGE VERSION lucky-luke-control-plane Ready control-plane 9m14s v1.30.0 lucky-luke-worker Ready <none> 8m47s v1.30.0 lucky-luke-worker2 Ready <none> 8m47s v1.30.0 ``` - Create deployment (in day 8/40): ```yaml --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy labels: env: demo spec: template: metadata: name: nginx-pod labels: env: demo type: frontend spec: containers: - name: nginx-container image: nginx ports: - containerPort: 80 replicas: 3 selector: matchLabels: env: demo ``` ```console root@localhost:~# kubectl create -f day08-deploy.yaml deployment.apps/nginx-deploy created root@localhost:~# kubectl get all NAME READY STATUS RESTARTS AGE pod/nginx-deploy-6bfd44d944-dlfcf 1/1 Running 0 71s pod/nginx-deploy-6bfd44d944-rgmjn 1/1 Running 0 71s pod/nginx-deploy-6bfd44d944-tpvfz 1/1 Running 0 71s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/nginx-deploy 3/3 3 3 71s NAME DESIRED CURRENT READY AGE replicaset.apps/nginx-deploy-6bfd44d944 3 3 3 71s ``` - Create `service`: ```console root@localhost:~# kubectl apply -f nodeport.yaml service/nodeport-svc created root@localhost:~# kubectl get all NAME READY STATUS RESTARTS AGE pod/nginx-deploy-6bfd44d944-dlfcf 1/1 Running 0 2m33s pod/nginx-deploy-6bfd44d944-rgmjn 1/1 Running 0 2m33s pod/nginx-deploy-6bfd44d944-tpvfz 1/1 Running 0 2m33s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 15m service/nodeport-svc NodePort 10.96.145.18 <none> 80:30001/TCP 7s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/nginx-deploy 3/3 3 3 2m33s NAME DESIRED CURRENT READY AGE replicaset.apps/nginx-deploy-6bfd44d944 3 3 3 2m33s ``` - Check the service: ```console root@localhost:~# kubectl describe svc nodeport-svc Name: 
nodeport-svc Namespace: default Labels: env=demo Annotations: <none> Selector: env=demo Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.96.145.18 IPs: 10.96.145.18 Port: <unset> 80/TCP TargetPort: 80/TCP NodePort: <unset> 30001/TCP Endpoints: 10.244.1.4:80,10.244.2.3:80,10.244.2.4:80 Session Affinity: None External Traffic Policy: Cluster Events: <none> ``` - Current state: ```console root@localhost:~# kubectl get pod,deploy,svc --show-labels -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS pod/nginx-deploy-c95b4f658-gj6jl 1/1 Running 0 47s 10.244.2.4 lucky-luke-worker2 <none> <none> env=demo,pod-template-hash=c95b4f658,type=frontend pod/nginx-deploy-c95b4f658-nc2qg 1/1 Running 0 47s 10.244.1.4 lucky-luke-worker <none> <none> env=demo,pod-template-hash=c95b4f658,type=frontend pod/nginx-deploy-c95b4f658-nfgrn 1/1 Running 0 47s 10.244.2.3 lucky-luke-worker2 <none> <none> env=demo,pod-template-hash=c95b4f658,type=frontend NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR LABELS deployment.apps/nginx-deploy 3/3 3 3 47s nginx-container nginx env=demo env=demo NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR LABELS service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21m <none> component=apiserver,provider=kubernetes service/nodeport-svc NodePort 10.96.145.18 <none> 80:30001/TCP 6m5s env=demo env=demo ``` - Check the `service` from our local: ```console root@localhost:~# curl localhost:30001 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> ``` --- ### 2. Cluster IP Because every `pod` which is restarted has got another `IP` address, we need to create a `clusterIP` service type. It's default `service` type in `Kubernetes`. We only need `port` and `targetPort` in its yaml configuration file. `selector` section is related to `deployment` label. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hqwrgo0mr1edfzqm36hh.png) (Photo from the video) #### Implementation ```yaml apiVersion: v1 kind: Service metadata: name: clusterip-svc labels: env: demo spec: selector: env: demo type: ClusterIP ports: - port: 80 targetPort: 80 ``` ```console root@localhost:~# kubectl apply -f clusterip.yaml service/clusterip-svc created root@localhost:~# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE clusterip-svc ClusterIP 10.96.242.10 <none> 80/TCP 6s kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 67m nodeport-svc NodePort 10.96.145.18 <none> 80:30001/TCP 51m root@localhost:~# kubectl get svc --show-labels -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR LABELS clusterip-svc ClusterIP 10.96.242.10 <none> 80/TCP 23s env=demo env=demo kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 67m <none> component=apiserver,provider=kubernetes nodeport-svc NodePort 10.96.145.18 <none> 80:30001/TCP 51m env=demo env=demo root@localhost:~# kubectl describe svc clusterip-svc Name: clusterip-svc Namespace: default Labels: env=demo Annotations: <none> Selector: env=demo Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.96.242.10 IPs: 10.96.242.10 Port: <unset> 80/TCP TargetPort: 80/TCP Endpoints: 10.244.1.4:80,10.244.2.3:80,10.244.2.4:80 Session Affinity: None Events: <none> root@localhost:~# kubectl get endpoints NAME ENDPOINTS AGE clusterip-svc 10.244.1.4:80,10.244.2.3:80,10.244.2.4:80 3m25s kubernetes 172.19.0.2:6443 70m nodeport-svc 10.244.1.4:80,10.244.2.3:80,10.244.2.4:80 54m ``` --- ### 3. Load Balancer `LoadBalancer` services are ideal for applications that need to handle high traffic volumes, such as web applications or APIs. With `LoadBalancer` services, we can access our application using a single IP address assigned to the load balancer.[source](https://www.baeldung.com/ops/kubernetes-service-types#3-loadbalancer-services) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/18krfnomfxnktom6szfe.png) (Photo from the video) #### Implementation ```yaml apiVersion: v1 kind: Service metadata: name: loadbalancer-svc labels: env: demo spec: selector: env: demo type: LoadBalancer ports: - port: 80 targetPort: 80 ``` ```console root@localhost:~# kubectl apply -f loadbalancer.yaml service/loadbalancer-svc created root@localhost:~# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE clusterip-svc ClusterIP 10.96.242.10 <none> 80/TCP 12m kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 80m loadbalancer-svc LoadBalancer 10.96.153.185 <pending> 80:31076/TCP 7s nodeport-svc NodePort 10.96.145.18 <none> 80:30001/TCP 64m root@localhost:~# kubectl get all NAME READY STATUS RESTARTS AGE pod/nginx-deploy-c95b4f658-gj6jl 1/1 Running 0 59m pod/nginx-deploy-c95b4f658-nc2qg 1/1 Running 0 59m pod/nginx-deploy-c95b4f658-nfgrn 1/1 Running 0 59m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/clusterip-svc ClusterIP 10.96.242.10 <none> 80/TCP 13m service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 80m service/loadbalancer-svc LoadBalancer 10.96.153.185 <pending> 80:31076/TCP 15s service/nodeport-svc NodePort 10.96.145.18 <none> 80:30001/TCP 64m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/nginx-deploy 3/3 3 3 59m NAME DESIRED CURRENT READY AGE replicaset.apps/nginx-deploy-c95b4f658 3 3 3 59m ``` As we can see, `EXTERNAL-IP` for our `LoadBalancer` service type is `<pending>`. 
This is where we would normally get an IP address or DNS name to give to our users, but since we haven't provisioned a load balancer for the `kind` cluster, the `EXTERNAL-IP` stays pending and the service behaves like a `NodePort` service. [For more info](https://kind.sigs.k8s.io/docs/user/loadbalancer/)

---

### 4. External Name

We map the service to an external DNS name. The sample below is from the official documentation. [source](https://kubernetes.io/docs/concepts/services-networking/service/#externalname)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
```

---

### 5. Creating `service` in command-line

Syntax: `kubectl create service nodeport NAME [--tcp=port:targetPort] [--dry-run=server|client|none]`

Example:
```
kubectl create service nodeport myservice --node-port=31000 --tcp=3000:80
```

---

### 6. Useful links

- [Kubernetes Official](https://kubernetes.io/docs/reference/kubectl/quick-reference/)
- [Kubectl Command Cheatsheet](https://www.bluematador.com/learn/kubectl-cheatsheet)
- [Kubectl Reference Docs](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-strong-getting-started-strong-)
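One thing the walkthrough above does not show is how to check that the `ClusterIP` service actually answers, since it is only reachable from inside the cluster. A quick sketch of one way to verify it (the service name `clusterip-svc` matches the earlier manifest; the temporary pod name and the `curlimages/curl` image are just convenient choices):

```bash
# Run a throwaway pod inside the cluster and curl the ClusterIP service by name.
kubectl run tmp-curl --rm -it --image=curlimages/curl --restart=Never \
  -- curl -s http://clusterip-svc:80
```

This should print the same nginx welcome page that the `NodePort` test returned, this time routed through the cluster-internal virtual IP.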
sina14
1,910,851
How to mock a hidden dependency
Working with legacy code is difficult. When working with legacy code, you can run into a number of...
0
2024-07-03T21:59:26
https://dev.to/clarenced/how-to-mock-a-hidden-dependency-17ih
refactoring, java, cleancoding
Working with legacy code is difficult. When working with legacy code, you can run into a number of challenges, for instance: how to write a unit test for a method that contains a hidden, private dependency. Let me show you an example of such code:

```java
public class NotificationService {

    private void sendSMSNotification(User user, Event event, boolean isUrgent) throws NotificationException {
        try {
            String messageContent = buildSMSMessageContent(user, event, isUrgent);
            String phoneNumber = user.getPhoneNumber();

            if (phoneNumber == null || phoneNumber.isEmpty()) {
                throw new NotificationException("User's phone number is not available.");
            }

            // Get SmsService bean from ApplicationContext
            SmsService smsService = ApplicationContextHolder.getBean(SmsService.class);

            boolean isSent = smsService.sendSMS(phoneNumber, messageContent);
            if (!isSent) {
                throw new NotificationException("Failed to send SMS to " + phoneNumber);
            }

            // Optionally log the SMS sending for auditing purposes
            logSMSSending(user, phoneNumber, messageContent, isUrgent);
        } catch (Exception e) {
            throw new NotificationException("Error occurred while sending SMS notification.", e);
        }
    }
}
```

Here the hidden dependency is the `SmsService`. As you can see, it is instantiated with the Spring ApplicationContext. This is a common pattern we often encounter when working with legacy code. The idea behind this `ApplicationContextHolder` is that it serves as a "utility" class holding a reference to the Spring application context; instead of injecting the bean or service with `@Autowired`, we fetch it directly by calling the static method `ApplicationContextHolder.getBean`.

This is problematic because `SmsService` is hidden, private, and makes a real API call to the SMS provider. In my test, I want to have the possibility to mock the `SmsService`. So, how do we achieve that?

## Extract and override getter

There is a technique that Michael Feathers describes in his book `Working Effectively with Legacy Code` to overcome this problem. It's called `Extract and Override getter`.

To expose the `SmsService`, define a getter, `getSmsService`, and use that getter in all places where the `SmsService` is used in the class. This `getSmsService` getter has protected visibility.

```java
public class NotificationService {

    // relaxed from private to package-private so a test in the same package can call it
    void sendSMSNotification(User user, Event event, boolean isUrgent) throws NotificationException {
        try {
            // same as before
            SmsService smsService = getSmsService();

            boolean isSent = smsService.sendSMS(phoneNumber, messageContent);
            if (!isSent) {
                throw new NotificationException("Failed to send SMS to " + phoneNumber);
            }
            // same as before (logging and the catch block are unchanged)
        }
    }

    protected SmsService getSmsService() {
        return ApplicationContextHolder.getBean(SmsService.class);
    }
}
```

2nd step, create a `TestNotificationService` that will override `getSmsService` and return a `FakeSmsService`.

```java
class TestNotificationService extends NotificationService {
    @Override
    public SmsService getSmsService() {
        return new FakeSmsService();
    }
}
```

For the sake of simplicity, let's imagine that `SmsService` is an interface; otherwise you would need to extract an interface from the `SmsService` that contains `sendSMS` as a method. The `FakeSmsService` will return false for the `sendSMS` method.

```java
class FakeSmsService implements SmsService {
    @Override
    public boolean sendSMS(String phoneNumber, String messageContent) {
        return false;
    }
}
```
```java
@Test
void test_raise_an_exception_when_sms_is_not_sent() {
    // user and event are test fixtures built elsewhere; the user's phone number is 0606060606
    NotificationService notificationService = new TestNotificationService();

    Exception exception = assertThrows(NotificationException.class, () -> {
        notificationService.sendSMSNotification(user, event, false);
    });

    // sendSMSNotification wraps the original failure, so the detail message is on the cause
    Assertions.assertEquals("Failed to send SMS to 0606060606", exception.getCause().getMessage());
}
```

## To summarize

1. Create a getter with protected visibility to expose the hidden dependency.
2. Define a test class that extends the main class and overrides the getter previously defined.
3. Use it in your test.
clarenced
1,910,856
Hey
It's Dylan McMahon
0
2024-07-03T21:53:41
https://dev.to/dylanmcmahon/hey-58gg
It's Dylan McMahon
dylanmcmahon
1,910,855
Optimising User Account Management on Linux Using a Bash Script
As a SysOps engineer, you often find yourself managing users and groups on Linux systems. This task...
0
2024-07-03T21:47:00
https://dev.to/chiemerie_anaebo_60fb2815/optimising-user-account-management-on-linux-using-a-bash-script-2pc3
devops, linux, bash, ubuntu
As a SysOps engineer, you often find yourself managing users and groups on Linux systems. This task can be tedious, especially when dealing with a large number of users. Automation is key to reducing manual workload and minimizing errors. In this article, I'll walk you through a Bash script that automates the process of creating and managing users and groups on a Linux system. This script is particularly useful for onboarding new developers and ensuring consistent user setups. ## Why Automate User Management? Manual user and group management can be time-consuming and prone to errors. Automating this process not only saves time but also ensures consistency and security. By using a script, you can: - Quickly create multiple users and groups. - Ensure home directories are properly set up with correct permissions. - Generate and securely store random passwords. - Log all actions for auditing purposes. ## Prerequisites The following are the requirements for creating and executing the script. - Fundamental Understanding of Linux Commands - Administrative privilege - Text editors such as Vim, Nano, and TextEdit ## Script Overview The script reads a text file containing usernames and group names, creates the users and groups as specified, sets up home directories, and logs all actions. Here’s a breakdown of the script: **Log File and Password File Definitions:** These lines specify the locations for the log and password files. The log file will record the script's activity, while the password file will safely store the produced passwords. ```bash LOGFILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.txt" ``` **Log Function:** This function logs messages to the log file with a timestamp. ```bash log() { echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> $LOGFILE } ``` **Password Generation Function:** This function generates a random password using OpenSSL. The password is 12 characters long and encoded in base64. ```bash generate_password() { openssl rand -base64 12 } ``` **Input File Check:** This block checks if the input file is provided as a command-line argument. If not, it displays a usage message and exits. ```bash if [ "$#" -ne 1 ]; then echo "Usage: $0 <input_file>" exit 1 fi ``` **Create Secure Directory:** This block creates the `/var/secure` directory if it doesn't exist and sets its permissions to be accessible only by the owner. ```bash mkdir -p /var/secure chmod 700 /var/secure ``` **Read Input File Line by Line:** This block reads the input file line by line. It ignores empty lines and lines starting with `#`. ```bash while IFS= read -r line || [ -n "$line" ]; do [[ -z "$line" || "$line" =~ ^# ]] && continue ``` **Parse Username and Groups:** This block splits each line into a username and groups using ; as the delimiter. It also trims any leading or trailing whitespace from the username. ```bash IFS=';' read -r username groups <<< "$line" username=$(echo "$username" | xargs) ``` **Check for Empty Username:** If the username is empty, the script skips to the next line. ```bash [ -z "$username" ] && continue ``` **Create Group:** This block checks if a group with the same name as the username exists. If not, it creates the group and logs the action. ```bash if ! getent group "$username" >/dev/null; then groupadd "$username" log "Created group $username" fi ``` **Create User:** This block checks if the user exists. 
If not, it creates the user with a home directory, assigns the user to their personal group, sets the password, stores the password, and logs the action. ```bash if ! id -u "$username" >/dev/null 2>&1; then password=$(generate_password) useradd -m -g "$username" "$username" echo "$username:$password" | chpasswd echo "$username:$password" >> $PASSWORD_FILE log "Created user $username with password" else log "User $username already exists" fi ``` **Assign User to Additional Groups:** This block splits the groups string into an array using , as the delimiter, trims whitespace from each group name, creates any additional groups if they don't exist, and assigns the user to these groups. ```bash IFS=',' read -r -a group_array <<< "$groups" for group in "${group_array[@]}"; do group=$(echo "$group" | xargs) [ -z "$group" ] && continue if ! getent group "$group" >/dev/null; then groupadd "$group" log "Created group $group" fi usermod -aG "$group" "$username" log "Added $username to group $group" done ``` **Final Log Message:** This line logs the completion of the user and group creation process. ```bash log "User and group creation process completed." ``` ## Conclusion Automating user and group management on Linux systems can greatly improve efficiency and reduce errors. The script provides a robust solution for onboarding new developers and ensuring consistent user setups. Interested in honing your technical skills and working on real-world projects? Check out the [HNG Internship](https://hng.tech/internship) for opportunities to learn and grow. If you're looking to hire talented interns, explore the [HNG](https://hng.tech/hire) Hire page. By incorporating automation into your workflow, you can streamline repetitive tasks and focus on more strategic initiatives. Happy scripting!
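For completeness, here is a quick sketch of how the script could be invoked (the file names `create_users.sh` and `users.txt`, and the sample usernames, are illustrative assumptions; the `user;group1,group2` line format matches what the parsing block above expects):

```bash
# Example input file: one "username;group1,group2" entry per line (assumed contents).
cat > users.txt <<'EOF'
light;sudo,dev,www-data
idimma;sudo
mayowa;dev,www-data
EOF

# The script creates users and writes to /var/log and /var/secure, so run it as root.
sudo bash create_users.sh users.txt
```

After it finishes, `/var/log/user_management.log` records each action and `/var/secure/user_passwords.txt` holds the generated passwords, as described above.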
chiemerie_anaebo_60fb2815
1,910,853
Let's build "meet" together
Hi proactive creators! I’m Vlad, a co-founder of "meet," a startup creating a platform for...
0
2024-07-03T21:46:10
https://dev.to/vladislav_khryapin_c617ce/lets-build-meet-together-hia
ai, productivity, mobile, interview
Hi proactive creators!

I’m Vlad, a co-founder of “meet,” a startup creating a platform for scheduling meetings. We are now searching for:

- Chief Technology Officer (CTO)
- Lead UX/UI Designer
- Lead Graphic Designer

Please fill out the form in the presentation below or email us ([email protected]) if you are interested in joining the team or have any questions!

About the project: “meet” reduces the cost of arranging team and 1:1 meetings to under 15 seconds (outperforming a messenger) by using AI and innovative user interface elements. We focus on absolute simplicity of the interface, flexibility of use, and the comprehensiveness of the solution. We are now developing an MVP and plan to launch the product in the first half of the fall.

About the role: We are looking for a key team member, not an employee. It is an unpaid, long-term role: we need proactive creators driven to change the ordinary! Compensation is provided in the form of company equity options. Once we secure a funding round (~ fall), a salary can be discussed individually.

To find more detailed information about the product, position, and team, [click here](https://drive.google.com/file/d/1O4NRoN8tepiwxhvif8vb_T3gzbxEcHGC/view)!
vladislav_khryapin_c617ce
1,910,852
Automating User and Group Management with Bash Scripting
Introduction Managing user accounts and groups efficiently is crucial for system...
0
2024-07-03T21:39:04
https://dev.to/mukhtar_devops/automating-user-and-group-management-with-bash-scripting-4hlk
## **Introduction** Managing user accounts and groups efficiently is crucial for system administrators to maintain security and streamline operations in Linux environments. This article explores the implementation and functionality of a Bash script named "create_users.sh", designed to automate user and group management tasks. ## **Script Overview** The "create_users.sh" script is designed to read from an input file containing usernames and associated groups in a specific format (user;groups). It performs a series of operations to - create users, - manage groups, - assign permissions, and maintain security logs, all while adhering to best practices in system administration. **## Block-by-Block Explanation** **Block 1**: Root Privilege Check and Input Validation ``` # Check if the script is run with root privileges if [ "$EUID" -ne 0 ]; then echo "Please run as root" exit 1 fi # Check if the input file is provided as an argument if [ $# -ne 1 ]; then echo "Please run this instead: $0 <name-of-text-file>" exit 1 fi ``` Purpose: Ensures the script is executed with root privileges to perform administrative tasks. Functionality: Verifies if the input file (name-of-text-file) containing user and group data is provided as an argument. Error Handling: Terminates execution with an error message if conditions are not met, guiding proper usage. **Block 2**: File and Directory Initialisation ``` INPUT_FILE="$1" LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.txt" # Ensure the log and password files exist and have the correct permissions touch "$LOG_FILE" chmod 644 "$LOG_FILE" mkdir -p "$(dirname "$PASSWORD_FILE")" touch "$PASSWORD_FILE" chmod 600 "$PASSWORD_FILE" chown root:root "$PASSWORD_FILE" ``` Purpose: Prepares necessary files and directories for logging and password storage. Functionality: - Initialises variables for input file path (INPUT_FILE), log file path (LOG_FILE), and password file path (PASSWORD_FILE). - Creates or ensures existence of log file and password file with appropriate permissions (644 for logs, 600 for passwords). - Sets ownership of the password file to root for enhanced security. - Error Handling: The code snippet `mkdir -p "$(dirname "$PASSWORD_FILE")"` ensures that the path leading to the password file is created if not in existence before. **Block 3**: Logging Function ``` # Function to log messages log_message() { echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE" } ``` Purpose: Facilitates logging of script activities with timestamps. Functionality: - Defines log_message() function to prepend current timestamp to log messages. - Appends (not overwrite) messages to the log file ($LOG_FILE) using tee -a for both real time display on the terminal and logging purposes into the file specified. **Block 4**: User and Group Management Loop ``` # Read the input file and process each line while IFS=";" read -r username groups; do # Trim any leading or trailing whitespace from username and groups username=$(echo "$username" | xargs) groups=$(echo "$groups" | xargs) # Skip empty lines or lines with empty username if [ -z "$username" ]; then continue fi # Create the primary group with the same name as the username if ! getent group "$username" > /dev/null; then groupadd "$username" log_message "Group $username created." else log_message "Group $username already exists." fi # Create the user with the primary group if ! 
id "$username" > /dev/null 2>&1; then useradd -m -g "$username" "$username" log_message "User $username created with primary group $username." else log_message "User $username already exists." fi # Add user to additional groups if [ -n "$groups" ]; then usermod -aG "$(echo $groups | tr ',' ' ')" "$username" log_message "User $username added to groups: $groups." fi # Generate a random password for the user password=$(openssl rand -base64 12) echo "$username:$password" | chpasswd log_message "Password set for user $username." # Store the password securely echo "$username,$password" >> "$PASSWORD_FILE" done < "$INPUT_FILE" log_message "User creation script completed successfully." exit 0 ``` Purpose: Implements core functionalities for user and group management. Functionality: - Input Parsing: Reads each line from the input file, extracting username and groups using semicolon (;) as delimiter. - Group Management: Ensures creation of a primary group with the same name as the username. - User Management: Creates users if they do not exist, assigns home directories (-m flag), and manages group memberships by adding users to additional groups as contained in the input file. - Password Management: Generates random passwords securely using OpenSSL command "rand'and stores them in $PASSWORD_FILE. - Logging: Logs each action with descriptive messages and timestamps by calling the earlier declared log_message() function. ## **Conclusion** The create_users.sh script exemplifies efficient automation in Linux system administration, offering robust capabilities in user and group management. By following this structured approach, administrators can enhance operational efficiency, maintain security standards, and streamline user provisioning tasks across diverse IT environments. As an initiative dedicated to nurturing talent in technology, the HNG Internship provides invaluable opportunities for aspiring developers and system administrators to enhance their skills. By leveraging tools like Bash scripting, professionals can streamline administrative tasks and contribute effectively to organizational objectives. To learn more about the HNG Internship and its impact on tech enthusiasts worldwide, visit [HNG Internship](https://hng.tech/internship) and [HNG Hire](https://hng.tech/hire).
mukhtar_devops
1,910,612
Navigating Auto Care at Your Local Warren Mechanic Shop
When it comes to maintaining your vehicle's performance and ensuring safety on the road, finding a...
0
2024-07-03T18:52:41
https://dev.to/detroitmuffler/navigating-auto-care-at-your-local-warren-mechanic-shop-gp1
When it comes to maintaining your vehicle's performance and ensuring safety on the road, finding a reliable Warren mechanic shop is crucial. From routine maintenance to complex repairs, a full-service auto repair center in Warren provides an array of essential services designed to keep your car running smoothly. This article explores the diverse offerings of such a mechanic shop and the importance of professional automotive care. Comprehensive Services for Optimal Vehicle Health At a well-established [Warren mechanic shop](https://urlgeni.us/google_places/Auto-Body-Repair-Warren), you can expect a wide range of services that cover every aspect of your vehicle’s needs. A proficient team skilled in muffler & brakes will ensure that your car stops as smoothly as it goes, reducing the risk of brake failure or noise issues. Tires are another critical element; whether you need seasonal tire changes, rotations, or balancing, the right shop will have you covered. Transmission troubles can be particularly daunting for any car owner. A reputable mechanic shop in Warren will offer transmission services to address any shifting issues or fluid leaks, helping to prevent costly breakdowns and extend your vehicle's lifespan. Specialized Repair Services for Every Situation Occasionally, vehicles may experience more severe damageβ€”whether due to wear and tear or unfortunate collisions. That's when specialized collision repair services come into play. Skilled technicians use state-of-the-art tools and techniques to restore your vehicle’s bodywork and functionality after an accident. General repairs encompass various issues from fixing squeaky belts to addressing electrical problems. A reliable Warren mechanic shop should be equipped with advanced diagnostic equipment to quickly identify issues and perform necessary repairs efficiently. For engine repair needs, professionals at these shops diagnose and fix problems ranging from simple spark plug replacements to more complex engine rebuilds or replacements. Adapting to All Vehicle Types with Light and Heavy Repair Solutions Different vehicles require different types of attention, which is why light and heavy repair capabilities are indicators of an adaptable mechanic shop in Warren. Light repair typically involves smaller cars and straightforward fixes while heavy repair deals with larger vehicles like trucks or SUVs that might require more robust equipment and expertise. Electrical systems in modern cars are increasingly intricate; hence an electrical repair service is vital. Technicians must stay up-to-date with evolving technologies to tackle everything from faulty wiring to computer system malfunctions effectively. The Importance of Selecting a Versatile Mechanic Shop Choosing a Warren mechanic shop that provides comprehensive services offers convenience and peace of mind. Instead of hopping from one specialty shop to another for different services, drivers can rely on one trusted location for all their auto care needsβ€”saving time and ensuring consistency in service quality. Moreover, building a relationship with your local mechanics means they become familiar with your vehicle’s historyβ€”a significant advantage when diagnosing new problems or recommending preventive measures tailored specifically for your car. In conclusion, selecting the right Warren mechanic shop is about more than just solving immediate automotive troublesβ€”it's about entrusting your vehicle’s performance and longevity into capable hands. 
With services ranging from routine tire maintenance to complex collision repairs, these shops play an essential role in keeping our roads safer by ensuring every car they handle is in top-notch condition. Whether dealing with mufflers & brakes issues or requiring extensive engine repairsβ€”the right local mechanics make sure you drive away confident that all aspects of your car have been expertly cared for. Detroit Muffler and Brakes Address: [5173 8 Mile Rd, Warren, Michigan 48091](https://www.google.com/maps?cid=12621360660182579283) Phone: 586-759-0000 Website: [https://www.detroitmufflerbrakes.com/](https://www.detroitmufflerbrakes.com/)
detroitmuffler
1,910,156
Automating User and Group Management Using Bash Script
Bash, short for Bourne Again Shell, is a Unix shell and command language that has been a fundamental...
0
2024-07-03T21:33:59
https://dev.to/florenceokoli/automating-user-and-group-management-using-bash-script-4a7o
bash, devops, sysadmin, cloud
Bash, short for Bourne Again Shell, is a Unix shell and command language that has been a fundamental part of system administration and development for many years. As the default shell on many Unix-like operating systems, including Linux and macOS, Bash is renowned for its powerful capabilities in scripting and automation. Imagine having to create user accounts, assign them to appropriate groups, set up home directories, and generate secure passwords for each new developer joining your team. Doing this manually can be tedious and error-prone. This is where Bash comes in. You can automate these repetitive tasks with Bash scripting, ensuring consistency and saving valuable time. In this article, I used a Bash script to automate the onboarding process for new developers. This script reads a text file containing usernames and their respective groups, creates users, assigns them to groups, sets up their home directories, generates random passwords, and logs all actions. ## Prerequisites * A Unix-based system (Linux or macOS) * Basic knowledge of Unix commands and Bash scripting ## Objective The script should create users and groups as specified, set up home directories with appropriate permissions and ownership, generate random passwords for the users, and log all actions to /var/log/user_management.log. Additionally, store the generated passwords securely in /var/secure/user_passwords.txt ## My create_users.sh Script ``` #!/bin/bash # Log file and password storage LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.txt" # Ensure the /var/secure directory exists if [ ! -d "/var/secure" ]; then mkdir -p /var/secure chmod 700 /var/secure fi # Ensure the log file and password file exist and have correct permissions touch $LOG_FILE touch $PASSWORD_FILE chmod 600 $PASSWORD_FILE # Function to log messages log_message() { echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> $LOG_FILE } # Check if the file is supplied if [ $# -ne 1 ]; then log_message "ERROR: No input file supplied" echo "Usage: $0 <name-of-text-file>" exit 1 fi INPUT_FILE=$1 # Check if input file exists if [ ! -f $INPUT_FILE ]; then log_message "ERROR: Input file does not exist" echo "ERROR: Input file does not exist" exit 1 fi # Read the input file line by line while IFS=';' read -r username groups; do # Trim whitespace username=$(echo $username | xargs) groups=$(echo $groups | xargs) # Skip empty lines if [ -z "$username" ]; then continue fi # Create a personal group for the user if ! getent group $username >/dev/null; then groupadd $username log_message "Group $username created." else log_message "Group $username already exists." fi # Create the user with the personal group if ! id -u $username >/dev/null 2>&1; then useradd -m -g $username $username log_message "User $username created." else log_message "User $username already exists." fi # Assign the user to additional groups IFS=',' read -ra ADDR <<< "$groups" for group in "${ADDR[@]}"; do group=$(echo $group | xargs) if [ ! -z "$group" ]; then if ! getent group $group >/dev/null; then groupadd $group log_message "Group $group created." fi usermod -aG $group $username log_message "User $username added to group $group." fi done # Generate a random password password=$(openssl rand -base64 12) # Set the user's password echo "$username:$password" | chpasswd log_message "Password set for user $username." 
# Store the password securely echo "$username,$password" >> $PASSWORD_FILE done < "$INPUT_FILE" # Set the correct permissions on the password file chmod 600 $PASSWORD_FILE log_message "User creation process completed." echo "User creation process completed. Check $LOG_FILE for details." ``` ## A detailed breakdown of the Script Here is what this script does: * Shebang The `#!/bin/bash` known as the shebang indicates that the script should be run on the BASH shell. * Path to Log file and Password file ``` LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.txt" ``` The `LOG_FILE` and `PASSWORD_FILE` are variables that store the path to the log file and password file respectively. * Create /var/secure directory and set permission if it doesn't exist ``` if [ ! -d "/var/secure" ]; then mkdir -p /var/secure chmod 700 /var/secure fi ``` * Create Log file and Password file if they don't exist and set permission ``` touch $LOG_FILE touch $PASSWORD_FILE chmod 600 $PASSWORD_FILE ``` `touch $LOG_FILE` and `touch $PASSWORD_FILE` creates the log and password files if they do not already exist. `chmod 600 $PASSWORD_FILE` sets the permissions for the password file so that only the file owner can read and write to it. * Log message function ``` log_message() { echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> $LOG_FILE } ``` The `log_message` function takes a message as an argument and appends it to the log file with a timestamp. * Check if input file is provided ``` if [ $# -ne 1 ]; then log_message "ERROR: No input file supplied" echo "Usage: $0 <name-of-text-file>" exit 1 fi ``` This checks if `[ $# -ne 1 ]; then` checks if exactly one argument (the input file) is supplied to the script. If not, it logs an error, prints a usage message, and exits the script. * `INPUT_FILE=$1` assigns the first argument (input file) to the variable INPUT_FILE * Check if the input file exists ``` if [ ! -f $INPUT_FILE ]; then log_message "ERROR: Input file does not exist" echo "ERROR: Input file does not exist" exit 1 fi ``` The `if [ ! -f $INPUT_FILE ]; then` checks if the input file exists. If not, it logs an error, prints an error message, and exits the script.Cre * Read and process the input file The `while IFS=';' read -r username groups; do` starts a loop to read the input file line by line, expecting each line to contain a username and groups separated by a semicolon (;). * The command below trims whitespace from the username and groups variables if any. ``` username=$(echo $username | xargs) groups=$(echo $groups | xargs) ``` While ``` if [ -z "$username" ]; then continue fi ``` Skips empty lines (where username is empty). * Create a group for user ``` if ! getent group $username >/dev/null; then groupadd $username log_message "Group $username created." else log_message "Group $username already exists." fi ``` This creates a personal group for the user if it doesn't exist and logs the action. * Creates user for group ``` if ! id -u $username >/dev/null 2>&1; then useradd -m -g $username $username log_message "User $username created." else log_message "User $username already exists." fi ``` This creates the user with their group if they don't already exist and logs the action. * Creates additional groups for users. ``` IFS=',' read -ra ADDR <<< "$groups" for group in "${ADDR[@]}"; do group=$(echo $group | xargs) if [ ! -z "$group" ]; then if ! getent group $group >/dev/null; then groupadd $group log_message "Group $group created." 
    fi
    usermod -aG $group $username
    log_message "User $username added to group $group."
  fi
done
```
This command splits the group's string into an array, iterates over it, checks if the group exists (creating it if necessary) and adds the user to the group, logging each action.

* Generate a random password

The `password=$(openssl rand -base64 12)` generates a random password using OpenSSL.

* Set the user's password
```
echo "$username:$password" | chpasswd
log_message "Password set for user $username."
echo "$username,$password" >> $PASSWORD_FILE
done < "$INPUT_FILE"
```
The code above sets the user's password and then appends the username and password to the PASSWORD_FILE.

* This `done < "$INPUT_FILE"` code ends the loop that reads from the INPUT FILE
* The `chmod 600 $PASSWORD_FILE` code ensures the password file's permissions are secure after all passwords have been added.
* Log completion of the user creation process `log_message "User creation process completed."`

## Test the Script
To test this script, let's create a `users.txt` file

`nano users.txt`

In the text file, enter the usernames and groups, one entry per line:

`wendy; engineering,webteam`
`florenceokoli; admins, dev_team`
`chi; support`

## Execute the Script
`chmod +x create_users.sh`

## Run the Script
`sudo ./create_users.sh users.txt`

## Output
* Management Log

![Management Log File](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8n8tuyvsuqe1ew8se087.png)

* Password.txt File

![Password.txt file](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w94og8h91738x2k3e20u.png)

## Conclusion
Bash is a powerful scripting tool used to automate various tasks on Unix-like operating systems. This script is designed to read a text file containing usernames and their respective groups, create users and their personal groups, assign users to additional groups, set up home directories, generate random passwords, and log all these actions for auditing purposes. Additionally, it stores the generated passwords securely in a dedicated file.

This project is a stage 1 task in the DevOps HNG-11 Internship. For more information about the HNG Internship and its various opportunities, visit [HNG Internship](https://hng.tech/) and [HNG Hire](https://hng.tech/hire).
florenceokoli
1,910,850
Servers
A server is a computing system that provides services, resources, and data to other...
0
2024-07-03T21:32:35
https://dev.to/fernandomoyano/servidores-l1c
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/33agzh8go7pc0tj0es9d.jpg)

A **server** is a computing system that provides services, resources, and data to other devices, called clients, over a network. The term can refer both to the hardware (physical server) and to the software (server program) that manages and delivers these services. Servers are fundamental to network architecture and make web applications, data storage, email, online gaming, and more possible. Below, this is explained in more detail:

A **web server** is software and hardware that uses HTTP (Hypertext Transfer Protocol) and other protocols to respond to client requests made over the World Wide Web. **Its main function is to deliver web content to end users.** Here is a more detailed explanation of its components and functions:

### 🧩 Server Hardware

A **hardware server** is a physical machine designed to run services and applications and to manage network resources. It is built from specialized, robust components that allow it to handle heavy workloads and run continuously with high availability. Its key components and characteristics are described below:

Components of a Hardware Server

**Processor (CPU):**
- `Multi-core:` Servers usually have processors with multiple cores and/or multiple processors to handle concurrent tasks and compute-intensive processes.
- `x86 or ARM architecture:` The most common server architectures are x86 and ARM, the former being the more traditional.

**Memory (RAM):**
- `High capacity:` Servers usually have a large amount of RAM to support many applications and users simultaneously.
- `ECC (Error-Correcting Code) memory:` They use ECC memory to detect and correct data errors in real time, improving stability and reliability.

**Storage:**
- `Hard Disk Drives (HDD) and Solid State Drives (SSD)`: Servers may combine HDDs for bulk storage with SSDs for fast performance.
- `RAID arrays:` They use RAID (Redundant Array of Independent Disks) configurations to improve storage redundancy and performance.

**Motherboard:**
- `Support for multiple CPUs:` A server motherboard usually supports multiple processors.
- `Connectivity and expansion:` It provides multiple expansion slots for network cards, additional storage, and other components.

**Power Supply:**
- `Redundant and high capacity:` Server power supplies are redundant to ensure service continuity and to cope with high power demands.

**Cooling System:**
- `Advanced cooling:` Includes advanced cooling systems to keep operating temperatures under control, even under heavy load.

**Network Connectivity:**
- `High-speed network interfaces:` Equipped with high-speed network interfaces, such as Gigabit or 10 Gigabit Ethernet, to handle large volumes of network traffic.

### 🧩 Server Software

A **software server** is a program or set of programs that provide services to other programs or devices, known as clients. These services can include website hosting, database management, sending email, and more. The main aspects of a software server are explained below:

Common Types of Software Servers

**Web Server:**
- `Description:` Handles HTTP or HTTPS requests from web browsers and returns web content such as HTML, CSS, JavaScript, images, etc.
- `Examples:` Apache HTTP Server, Nginx, Microsoft Internet Information Services (IIS).

**Database Server:**
- `Description:` Provides services for storing, retrieving, and manipulating structured data in a database.
- `Examples:` MySQL, PostgreSQL, Microsoft SQL Server, MongoDB.

**Mail Server:**
- `Description:` Handles sending, receiving, and storing email.
- `Examples:` Microsoft Exchange Server, Postfix, Sendmail.

**FTP Server:**
- `Description:` Provides services for transferring files between systems using the File Transfer Protocol (FTP).
- `Examples:` vsftpd, ProFTPD, FileZilla Server.

**Application Server:**
- `Description:` Runs business applications and provides an environment for developing and deploying applications.
- `Examples:` Apache Tomcat, JBoss, WebSphere.

**Game Server:**
- `Description:` Manages the operation of online multiplayer video games.
- `Examples:` Minecraft Server, Counter-Strike: Global Offensive (CS:GO) Server.

# SSL Certificate

---

**SSL** **(Secure Sockets Layer)** is a security protocol designed to establish an encrypted connection between a web server and a web browser. This protocol guarantees that all data transmitted between the server and the browser remains private and intact. Although SSL has largely been replaced by its successor, TLS (Transport Layer Security), the term SSL is still commonly used to refer to this encryption technology.

**Encryption:**
- `Description:` SSL encrypts the data transmitted between the server and the client, preventing attackers from intercepting and reading the information in transit.
- `Benefit:` Protects the confidentiality of information such as personal data, passwords, and credit card details.

**Authentication:**
- `Description:` SSL uses digital certificates to verify the server's identity, ensuring that users are connecting to the legitimate server and not to an impostor.
- `Benefit:` Prevents phishing and impersonation attacks by ensuring that users are communicating with the right entity.

**Data Integrity:**
- `Description:` SSL ensures that transmitted data is not altered or corrupted during transfer.
- `Benefit:` Guarantees that the information received by the recipient is the same as what was originally sent, without modifications.

### 🧩 How SSL Works

Establishing the SSL Connection:

**SSL Handshake:** The process of establishing an SSL connection is called the "handshake" and consists of the following steps:

**Connection Request:** The client connects to the server and requests a secure connection.

**Certificate Delivery:** The server sends its digital certificate to the client, which contains the server's public key.

**Certificate Verification:** The client verifies the validity of the certificate with a certificate authority (CA).

**Secret Key Generation:** The client generates a secret key and encrypts it with the server's public key, then sends the encrypted key to the server.

**Secret Key Decryption:** The server decrypts the secret key with its private key.

**Establishing the Encrypted Session:** Both parties use the secret key to encrypt and decrypt the data transmitted during the session.

# The HTTPS Protocol

---

The "S" in "HTTPS" stands for **"Secure"**. HTTPS is a secure version of the HTTP protocol (Hypertext Transfer Protocol). This protocol is used to transfer data on the web, and the added "S" indicates that the communication between the user's browser and the web server is encrypted and therefore secure. Here are more details about HTTPS:

**Encryption.**
- `Description:` HTTPS uses the SSL (Secure Sockets Layer) protocol or its successor TLS (Transport Layer Security) to encrypt the data transmitted between the client (usually a web browser) and the server.
- `Benefit:` Encryption protects sensitive information, such as passwords, personal data, and credit card details, preventing attackers from intercepting and reading this data in transit.

**Data integrity.**
- `Description:` HTTPS ensures that transmitted data is not altered or corrupted during transfer.
- `Benefit:` Data integrity guarantees that the information received by the recipient is the same as what was originally sent, without modifications.

**Authentication.**
- `Description:` HTTPS verifies the identity of the website through a digital certificate issued by a trusted certificate authority.
- `Benefit:` Authentication ensures that users are communicating with the legitimate website and not with an impostor, helping to prevent phishing and impersonation attacks.
- Enabling the HTTPS protocol

# Domain

---

A **domain** is an easy-to-remember name used to identify **IP** addresses on the Internet. Domains let users access websites and other online services without having to remember complex numeric sequences. In essence, a domain acts as a friendly, human-readable address that points to a specific location on the network.

### 🧩 Registering a Domain

**Choose a Domain Name:**
- `Description:` Pick a unique name that represents the purpose of the website and is easy to remember.
- `Availability:` Check whether the desired domain name is available using a domain registration service.

**Register the Domain:**
- `Registrar:` Use an accredited domain registrar, such as GoDaddy, Namecheap, or Google Domains, to register the domain name.
- `Duration:` Domains are usually registered for periods of one to ten years, with the option to renew.

### 🧩 Registering Domains in Argentina

Detailed Step by Step

**Create an Account at NIC Argentina:**
Go to NIC Argentina and click "Crear cuenta" (Create account).
Complete the registration form with your personal information.
Verify your email to activate the account.

**Check Domain Availability:**
On the NIC Argentina home page, use the search bar to check the availability of the domain name you want.
If it is available, click "Solicitar" (Request).

**Complete the Registration Form:**
Provide the required information, such as:
Registrant's name.
Email address.
Contact information (administrative, technical, etc.).
Configure the DNS servers if you already have them, or you can do it later.

**Make the Payment:**
Select the payment method.
Complete the transaction following the instructions provided.
Keep the payment receipt for future registrations.

**Registration Confirmation:**
Check your email for the registration confirmation.
Once confirmed, the domain will be active and ready to use.

# Hosting

---

In computing, and especially on the Internet, a **"host"** refers to any device connected to a network that takes part in network communications. The term is widely used in different technical contexts and can refer to different kinds of devices:

**Host in Computer Networks:** In computer networking, a host is any device that has an assigned IP address and can be reached over the network. This includes personal computers, servers, network printers, routers, and other devices connected to the network.

**Host as an Internet Server:** More specifically, in the context of online services, a host refers to a server that provides services over the Internet. For example, a web server is a type of host that stores and serves websites to the users who access them through their browsers.

**Host as a Hosting Provider:** A hosting provider (such as Donweb, Bluehost, GoDaddy, etc.) is a company that offers web hosting services. In this case, the term "host" refers to the physical (or virtual) server where a website's files and data are stored, making the site accessible on the Internet.

### 🧩**Characteristics of a hosting plan**

**Available space:**
- `Definition:` The amount of disk space the hosting provider allocates to store your website's files, such as HTML pages, images, videos, databases, etc.
- `Importance:` Adequate space ensures you can store all of your website's content without capacity problems.

**Available bandwidth:**
- `Definition:` The maximum amount of data that can be transferred to and from your website in a given period of time (usually monthly).
- `Importance:` Sufficient bandwidth guarantees that your website can handle visitor traffic without slowdowns or interruptions caused by data limits.

**Monthly data transfer:**
- `Definition:` The limit on the data you can transfer to and from your website in a given month.
- `Importance:` This limit ensures you do not exceed the resources allowed by your hosting plan, avoiding additional costs or suspension of the service for excessive use.

**Email Accounts:**
- `Definition:` The custom email addresses you can create using your own domain (for example, [email protected]).
- `Importance:` They give a professional appearance and make it easier to communicate with your users under your own domain, as well as letting you manage multiple email accounts as needed.

**Security:**
- `Definition:` Includes measures such as SSL certificates for data protection, firewalls, regular backups, and malware detection.
- `Importancia:` Garantiza la integridad y protecciΓ³n de los datos almacenados en tu sitio web contra accesos no autorizados, ataques cibernΓ©ticos y pΓ©rdidas de informaciΓ³n crΓ­tica. **Buen soporte tΓ©cnico:** - `DefiniciΓ³n:` Disponibilidad de personal capacitado para resolver problemas tΓ©cnicos y brindar asistencia rΓ‘pida y efectiva a los usuarios del servicio de hosting. - `Importancia:` Un soporte tΓ©cnico eficiente y accesible es fundamental para resolver cualquier problema o duda relacionada con el funcionamiento del sitio web y garantizar su disponibilidad continua. ### 🧩Hostinger **Precios** - `Planes EconΓ³micos:` Hostinger es conocido por ofrecer algunos de los planes de hosting mΓ‘s econΓ³micos en el mercado, especialmente en su nivel de entrada. - `RelaciΓ³n Calidad-Precio:` Ofrece una buena relaciΓ³n calidad-precio con caracterΓ­sticas robustas incluso en planes de bajo costo. **CaracterΓ­sticas:** - `Rendimiento:` Utiliza tecnologΓ­a de almacenamiento en cachΓ©, servidores LiteSpeed y almacenamiento SSD para mejorar el rendimiento del sitio web. - `Panel de Control:` Utiliza un panel de control personalizado llamado hPanel, que es intuitivo y fΓ‘cil de usar. - `Escalabilidad:` Ofrece una gama de opciones que permiten escalar fΓ‘cilmente desde hosting compartido a VPS y hosting en la nube. `Integraciones:` IntegraciΓ³n con servicios como Cloudflare, G Suite y herramientas de SEO. **Soporte TΓ©cnico:** - `AtenciΓ³n al Cliente:` Soporte 24/7 a travΓ©s de chat en vivo y un sistema de tickets. `Recursos de Aprendizaje:` Extensa base de conocimientos y tutoriales en lΓ­nea. Mercado Objetivo: - `Global:` Hostinger es un proveedor global con servidores en varias regiones del mundo, lo que lo hace adecuado para un pΓΊblico internacional. - `NVMe (Non-Volatile Memory Express)` es una interfaz y un protocolo de comunicaciΓ³n creado para SSDs que utilizan memoria flash NAND, aprovechando las capacidades de las conexiones PCIe (Peripheral Component Interconnect Express) para mejorar drΓ‘sticamente la velocidad de transferencia de datos y la eficiencia en comparaciΓ³n con las interfaces mΓ‘s antiguas como SATA (Serial ATA). - La migraciΓ³n automΓ‘tica de un sitio web implica mover todos los archivos, datos, configuraciones y, a veces, incluso las aplicaciones asociadas de un servidor de alojamiento web a otro, utilizando herramientas y scripts automatizados que facilitan el proceso. Esta tΓ©cnica se emplea para minimizar el tiempo de inactividad y reducir el riesgo de errores manuales durante la migraciΓ³n. **ΒΏQuΓ© es una CDN?** - Una CDN es una red de servidores ubicados en diferentes lugares alrededor del mundo. Su funciΓ³n principal es almacenar en cachΓ© y entregar contenido estΓ‘tico y dinΓ‘mico de sitios web a los usuarios finales desde el servidor mΓ‘s cercano a la ubicaciΓ³n de cada usuario. Esto reduce la latencia, mejora la velocidad de carga y aumenta la disponibilidad del contenido. **Implicaciones de tener una CDN** gratuita en el servicio de hosting Mejor rendimiento y velocidad: - `ReducciΓ³n de la latencia:` Al entregar contenido desde el servidor mΓ‘s cercano al usuario, la CDN reduce la distancia que los datos deben recorrer, lo que disminuye la latencia y acelera el tiempo de carga de la pΓ‘gina. - `OptimizaciΓ³n de recursos:` Al distribuir la carga de trΓ‘fico entre mΓΊltiples servidores, se reduce la presiΓ³n sobre el servidor principal de alojamiento, mejorando el rendimiento general del sitio. 
Mejor experiencia del usuario: - `Velocidades de carga mΓ‘s rΓ‘pidas:` Los usuarios experimentarΓ‘n tiempos de carga mΓ‘s rΓ‘pidos, lo que puede aumentar la satisfacciΓ³n del usuario y reducir las tasas de rebote. - `Acceso global:` Los visitantes de cualquier parte del mundo podrΓ‘n acceder al contenido de manera rΓ‘pida y eficiente, independientemente de su ubicaciΓ³n geogrΓ‘fica. **La protecciΓ³n DDoS** estΓ‘ndar es una medida de seguridad implementada por servicios de hosting y proveedores de redes para proteger sitios web y aplicaciones contra ataques de denegaciΓ³n de servicio distribuida (DDoS). Los ataques DDoS intentan hacer que un sitio web o servicio en lΓ­nea no estΓ© disponible para sus usuarios previstos al sobrecargarlo con una enorme cantidad de trΓ‘fico desde mΓΊltiples fuentes. **ΒΏQuΓ© es un ataque DDoS?** Un ataque DDoS implica el uso de mΓΊltiples sistemas comprometidos para inundar el objetivo, como un servidor web, con una cantidad excesiva de trΓ‘fico. Este trΓ‘fico puede sobrecargar el servidor, agotando sus recursos y haciΓ©ndolo inaccesible para los usuarios legΓ­timos. Los ataques DDoS pueden ser devastadores, causando tiempos de inactividad significativos y pΓ©rdidas financieras. # Diferencia entre Dominio y hosting --- **Dominio:** Es la direcciΓ³n web que identifica un sitio en Internet. **Hosting:** Es el servicio que proporciona el espacio en servidores para almacenar y servir los archivos del sitio web. # Planes de hosting --- ### 🧩Algunas opciones - LattingCloud - Hostinger - donweb - towebs - mesi - hostrentable ### 🧩Porque hay diferencias de precio? **Hosting Compartido (Servidor Compartido)** - `AnalogΓ­a:` Te alojas en una casa con una familia; tienes tu propia habitaciΓ³n, pero compartes el baΓ±o, la cocina y otros espacios comunes. **CaracterΓ­sticas:** - `Recursos Compartidos:` Varios sitios web comparten los mismos recursos del servidor (CPU, RAM, ancho de banda, etc.). - `Costo:` Es la opciΓ³n mΓ‘s econΓ³mica debido a la comparticiΓ³n de recursos. - `Rendimiento:` Puede ser afectado si otros sitios web en el mismo servidor experimentan un alto trΓ‘fico. - `GestiΓ³n:` El proveedor de hosting gestiona la mayorΓ­a de las configuraciones del servidor, lo cual es conveniente para usuarios con menos experiencia tΓ©cnica. - `Ideal para:` Sitios web pequeΓ±os o personales con poco trΓ‘fico y presupuesto limitado. **Servidor VPS (Servidor Privado Virtual)** - `AnalogΓ­a:` Vives en la misma casa, pero tu habitaciΓ³n es mΓ‘s grande, tienes baΓ±o propio, un microondas para calentar comida y un buen cerrojo en la puerta de tu habitaciΓ³n. **CaracterΓ­sticas:** - `Recursos Dedicados:` Cada VPS tiene una porciΓ³n de los recursos del servidor asignada exclusivamente, lo que mejora el rendimiento y la estabilidad. - `Costo:` MΓ‘s caro que el hosting compartido, pero mΓ‘s econΓ³mico que un servidor dedicado. - `Rendimiento:` Menos afectado por otros usuarios del servidor, proporcionando mayor fiabilidad y capacidad de manejar trΓ‘fico moderado. - ` GestiΓ³n:` Ofrece mΓ‘s control y opciones de configuraciΓ³n que el hosting compartido; algunos proveedores ofrecen gestiΓ³n parcial o total del servidor. - `Ideal para:` Sitios web con trΓ‘fico moderado, pequeΓ±as empresas o aplicaciones que requieren mΓ‘s recursos y control que los proporcionados por el hosting compartido. **Servidor Dedicado** - `AnalogΓ­a:` Eres el dueΓ±o de la casa y no la compartes con nadie, o en todo caso, decides tΓΊ con quiΓ©n la compartes. 
**CaracterΓ­sticas:** - `Recursos Exclusivos:` Todo el servidor estΓ‘ dedicado a tu sitio web, proporcionando el mΓ‘ximo rendimiento y estabilidad. - `Costo:` Es la opciΓ³n mΓ‘s costosa debido a la exclusividad de los recursos. - `Rendimiento:` Ofrece el mejor rendimiento, capaz de manejar grandes volΓΊmenes de trΓ‘fico sin degradaciΓ³n del servicio. - `GestiΓ³n:` Requiere mayor conocimiento tΓ©cnico para gestionar y mantener el servidor; algunos proveedores ofrecen servicios gestionados. - `Ideal para:` Sitios web grandes, aplicaciones con altos requisitos de recursos, tiendas en lΓ­nea con mucho trΓ‘fico y empresas que necesitan mΓ‘xima seguridad y rendimiento. # Cloud hosting --- El Cloud Hosting, tambiΓ©n conocido como **hosting en la nube,** es una forma de alojamiento web que utiliza recursos de mΓΊltiples servidores fΓ­sicos interconectados y distribuidos en la nube. En lugar de depender de un ΓΊnico servidor fΓ­sico como en el hosting tradicional, el cloud hosting utiliza una red de servidores virtuales para alojar los datos y aplicaciones de los usuarios. ### 🧩Principales ventajas **Rentabilidad:** - `DefiniciΓ³n:` El cloud hosting suele ofrecer un modelo de pago por uso, lo que significa que solo pagas por los recursos que consumes. Esto puede resultar mΓ‘s econΓ³mico que los planes tradicionales de hosting que tienen tarifas fijas, especialmente para sitios web con fluctuaciones en el trΓ‘fico. - `Beneficios:` Permite a las empresas controlar y optimizar sus costos operativos al escalar recursos solo cuando sea necesario, evitando gastos innecesarios en infraestructura subutilizada. **Escalabilidad:** - `DefiniciΓ³n:` Permite aumentar o disminuir dinΓ‘micamente los recursos (como CPU, RAM, almacenamiento) segΓΊn las necesidades del sitio web o aplicaciΓ³n. - `Beneficios:` Facilita manejar picos de trΓ‘fico repentinos sin experimentar caΓ­das en el rendimiento, asegurando que el sitio web siempre estΓ© disponible y responda de manera eficiente a la demanda del usuario. **IntegraciΓ³n:** - `DefiniciΓ³n:` Muchos proveedores de cloud hosting ofrecen una amplia gama de servicios y herramientas integradas que facilitan la gestiΓ³n y el desarrollo de aplicaciones en la nube. - `Beneficios:` Facilita la implementaciΓ³n de soluciones complejas, como bases de datos, servicios de almacenamiento, anΓ‘lisis de datos y herramientas de desarrollo, todo dentro del mismo entorno de cloud hosting, lo que simplifica la administraciΓ³n y mejora la eficiencia operativa. **Flexibilidad:** - `DefiniciΓ³n:` Permite a los usuarios configurar y personalizar fΓ‘cilmente su entorno de hosting segΓΊn sus necesidades especΓ­ficas, ajustando recursos y configuraciones de software de manera rΓ‘pida y eficiente. - `Beneficios:` Proporciona la capacidad de adaptarse rΓ‘pidamente a cambios en los requisitos del negocio o tecnolΓ³gicos, permitiendo a las empresas innovar y escalar de manera mΓ‘s Γ‘gil en un entorno competitivo. **Fiabilidad:** - `DefiniciΓ³n:` El cloud hosting generalmente ofrece redundancia y alta disponibilidad mediante la distribuciΓ³n de la carga de trabajo a travΓ©s de mΓΊltiples servidores y centros de datos. - `Beneficios:` Minimiza el riesgo de tiempo de inactividad debido a fallos de hardware, asegurando una mayor continuidad del servicio y una experiencia consistente para los usuarios finales. **Seguridad:** - `DefiniciΓ³n:` Los proveedores de cloud hosting suelen ofrecer medidas de seguridad avanzadas, como cifrado de datos, firewalls, monitoreo de red y detecciΓ³n de intrusiones. 
- `Beneficios:` Protege los datos y las aplicaciones contra amenazas cibernΓ©ticas, garantizando la integridad y confidencialidad de la informaciΓ³n almacenada en la nube. ### 🧩CΓ³mo funciona? El funcionamiento del cloud hosting se basa en la distribuciΓ³n de recursos informΓ‘ticos a travΓ©s de una red de servidores interconectados y virtualizados que componen la "nube". **Infraestructura Distribuida:** Un proveedor de cloud hosting opera mΓΊltiples servidores fΓ­sicos distribuidos en varios centros de datos. Estos servidores estΓ‘n interconectados a travΓ©s de una red de alta velocidad. **VirtualizaciΓ³n:** Utiliza tecnologΓ­as de virtualizaciΓ³n para crear instancias virtuales de servidores dentro de estos servidores fΓ­sicos. Cada instancia virtual puede actuar como un servidor independiente con su propio sistema operativo, recursos (CPU, RAM, almacenamiento) y configuraciones especΓ­ficas. **Escalabilidad DinΓ‘mica:** Los recursos como CPU, RAM y almacenamiento pueden asignarse y reasignarse dinΓ‘micamente segΓΊn la demanda. Esto permite escalar verticalmente (aumentar recursos de una instancia) o horizontalmente (agregar mΓ‘s instancias) de manera rΓ‘pida y eficiente. **Alta Disponibilidad y Redundancia:** Los datos y aplicaciones se replican y distribuyen a travΓ©s de mΓΊltiples servidores dentro de la red de la nube. Si un servidor fΓ­sico falla, otro puede asumir la carga de trabajo automΓ‘ticamente, asegurando una alta disponibilidad y minimizando el riesgo de tiempo de inactividad. **AdministraciΓ³n Centralizada:** Los usuarios pueden gestionar y configurar sus recursos a travΓ©s de una interfaz de administraciΓ³n web proporcionada por el proveedor de cloud hosting. Esto incluye la capacidad de monitorear el rendimiento, realizar copias de seguridad, escalar recursos y configurar la seguridad. **Modelo de Pago por Uso:** Muchos proveedores de cloud hosting ofrecen un modelo de pago por uso, donde los usuarios solo pagan por los recursos que consumen. Esto puede incluir tiempo de CPU utilizado, almacenamiento utilizado, transferencia de datos, entre otros, lo que permite a las empresas controlar sus costos de manera efectiva. # Utilizar un Hosting --- ### 🧩Hosting gratuito - infinityFree Los registros CNAME (Canonical Name) personalizados son un tipo de registro DNS que se utilizan para alias o redireccionar un dominio o subdominio a otro dominio. En el contexto que mencionas, "Custom CNAME Records" se refiere a la capacidad de agregar registros CNAME especΓ­ficos para los dominios asociados a tu cuenta de hosting, por ejemplo, para integrar servicios de terceros como Google Apps. **ΒΏQuΓ© es un Registro CNAME?** Un registro CNAME es un tipo de entrada en el sistema DNS (Domain Name System) que asigna un alias a un nombre de dominio. EspecΓ­ficamente, un CNAME redirige un dominio o subdominio a otro dominio. Esto es ΓΊtil cuando deseas que mΓΊltiples dominios o subdominios se dirijan al mismo contenido, sin tener que duplicar los registros A (que apuntan a una direcciΓ³n IP especΓ­fica). # Cliente FTP (Filezilla) --- **FileZilla** es un **cliente FTP (File Transfer Protocol)** y SFTP (SSH File Transfer Protocol) gratuito y de cΓ³digo abierto. EstΓ‘ diseΓ±ado para facilitar la transferencia de archivos entre tu computadora local y un servidor remoto. FileZilla es ampliamente utilizado por desarrolladores web, administradores de sistemas y cualquier persona que necesite transferir archivos a travΓ©s de Internet de manera segura y eficiente. 
### 🧩CaracterΓ­sticas Principales de FileZilla: **Interfaz de Usuario Amigable:** - `Explorador de Archivos:` FileZilla proporciona una interfaz grΓ‘fica que muestra el sistema de archivos local y el remoto en dos paneles separados, permitiendo arrastrar y soltar archivos para transferirlos. - `Gestor de Sitios:` Permite guardar las configuraciones de conexiΓ³n para varios servidores, facilitando la gestiΓ³n de mΓΊltiples sitios. **Protocolos Soportados:** - `FTP:` El protocolo estΓ‘ndar para la transferencia de archivos. - `FTPS:` FTP sobre TLS/SSL, que proporciona una capa adicional de seguridad. - `SFTP:` Transferencia de archivos sobre SSH, lo que ofrece una seguridad robusta. **Seguridad:** - ` AutenticaciΓ³n SSH:` Para conexiones SFTP, FileZilla soporta la autenticaciΓ³n mediante claves SSH. - `EncriptaciΓ³n TLS/SSL:` Para conexiones FTPS, asegurando que los datos transferidos estΓ©n cifrados y protegidos. **GestiΓ³n de Transferencias:** - `Cola de Transferencia:` Permite ver y gestionar las transferencias de archivos en curso y pendientes. - `ReanudaciΓ³n de Transferencias:` Si una transferencia se interrumpe, FileZilla puede reanudarla desde donde se quedΓ³. **Compatibilidad Multiplataforma:** - `Sistemas Operativos:` FileZilla estΓ‘ disponible para Windows, macOS y Linux, lo que lo hace accesible para una amplia gama de usuarios. **Funciones Adicionales:** - `Soporte para IPv6:` Permite conexiones en redes que utilizan el protocolo IPv6. - `LΓ­mites de Velocidad:` Los usuarios pueden establecer lΓ­mites de velocidad para las transferencias de archivos. - `Marcadores:` Facilita el acceso rΓ‘pido a directorios frecuentemente utilizados. ### 🧩Uso de FileZilla: **ConexiΓ³n a un Servidor:** - `ConfiguraciΓ³n:` Introduce el nombre del host, el puerto (21 para FTP, 22 para SFTP), el nombre de usuario y la contraseΓ±a. - `ConexiΓ³n RΓ‘pida:` Utiliza la barra de conexiΓ³n rΓ‘pida para conectarse a un servidor sin guardar la configuraciΓ³n. **Transferencia de Archivos:** - `Arrastrar y Soltar:` Arrastra archivos desde el panel local al panel remoto para subirlos, o viceversa para descargarlos.` - ` Cola de Transferencia:` Observa el progreso de las transferencias y maneja mΓΊltiples transferencias simultΓ‘neamente. **GestiΓ³n de Archivos:** - `Operaciones BΓ‘sicas:` Renombrar, eliminar y crear nuevos directorios tanto en el sistema de archivos local como en el remoto. ### 🧩Ejemplos de Uso: **Desarrollo Web:** Subir archivos de un sitio web a un servidor web. **Backup:** Descargar copias de seguridad de archivos almacenados en un servidor remoto. **AdministraciΓ³n de Sistemas:** Transferir archivos de configuraciΓ³n y scripts a servidores remotos.
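Como referencia rΓ‘pida, las mismas transferencias que se hacen con FileZilla tambiΓ©n pueden realizarse desde la terminal con el cliente `sftp` incluido en OpenSSH. Este es solo un esquema mΓ­nimo a modo de ejemplo: el usuario `usuario` y el host `ejemplo.com` son valores ficticios que deben reemplazarse por los datos reales de tu hosting.

```bash
# ConexiΓ³n SFTP al servidor remoto (puerto 22 por defecto)
sftp -P 22 usuario@ejemplo.com

# Dentro de la sesiΓ³n interactiva:
#   put index.html        # sube un archivo del directorio local al remoto
#   get respaldo.tar.gz   # descarga un archivo del servidor al directorio local
#   ls / lls              # lista archivos remotos / locales
#   bye                   # cierra la sesiΓ³n
```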
fernandomoyano
1,910,848
Automating User and Group Creation with Bash: A Practical Guide
As a SysOps engineer, managing user accounts and groups efficiently is a crucial task. Automation...
0
2024-07-03T21:31:49
https://dev.to/josephomotosho/automating-user-and-group-creation-with-bash-a-practical-guide-38ne
As a SysOps engineer, managing user accounts and groups efficiently is a crucial task. Automation through scripting can significantly streamline this process, ensuring consistency and saving time. In this guide, we'll walk through a bash script that automates the creation of users and groups based on a provided text file. This script also sets up home directories, generates random passwords, and securely logs all actions. ## Script Overview Our script, `create_users.sh`, performs the following tasks: 1. Reads a text file containing usernames and group names. 2. Creates users and personal groups. 3. Assigns users to additional groups. 4. Generates and assigns random passwords. 5. Logs all actions to `/var/log/user_management.log`. 6. Stores passwords securely in `/var/secure/user_passwords.txt`. Script Breakdown **Input Validation**: ```bash if [ $# -ne 1 ]; then echo "Usage: $0 <name-of-text-file>" exit 1 fi ``` **File and Directory Setup**: ```bash USER_FILE=$1 LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.txt" mkdir -p /var/secure touch $PASSWORD_FILE chmod 600 $PASSWORD_FILE touch $LOG_FILE ``` **Logging Function**: ```bash log_action() { echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> $LOG_FILE } ``` **Reading and Processing the Input File**: ```bash while IFS=';' read -r username groups; do username=$(echo $username | xargs) groups=$(echo $groups | xargs) [ -z "$username" ] && continue ... done < $USER_FILE ``` **User and Group Creation**: ```bash if ! getent group $username > /dev/null; then groupadd $username log_action "Created group: $username" fi if ! id -u $username > /dev/null 2>&1; then useradd -m -g $username -s /bin/bash $username log_action "Created user: $username with personal group: $username" fi ``` **Assigning Additional Groups**: ```bash if [ -n "$groups" ]; then IFS=',' read -ra ADDITIONAL_GROUPS <<< "$groups" for group in "${ADDITIONAL_GROUPS[@]}"; do group=$(echo $group | xargs) if ! getent group $group > /dev/null; then groupadd $group log_action "Created group: $group" fi usermod -aG $group $username log_action "Added user $username to group: $group" done fi ``` **Generating and Storing Passwords**: ```bash PASSWORD=$(openssl rand -base64 12) echo "$username:$PASSWORD" | chpasswd log_action "Set password for user: $username" echo "$username,$PASSWORD" >> $PASSWORD_FILE ``` **Conclusion** This bash script automates the user management process, ensuring efficiency and security. By integrating this script into your system administration routine, you can handle user accounts and groups with ease. For more resources and to explore internship opportunities, visit [HNG Internship](https://hng.tech/internship) and [HNG Hire](https://hng.tech/hire).
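As a quick postscript, here is a minimal, hypothetical run of the script. The file name `users.txt` and the sample usernames are assumptions for the example only; the `user;group1,group2` layout is the format the parsing loop above expects.

```bash
# Hypothetical input file: one "username;group1,group2" entry per line
cat > users.txt <<'EOF'
light;sudo,dev,www-data
mayowa;dev
EOF

# Run as root so useradd, groupadd and chpasswd can succeed
sudo bash create_users.sh users.txt

# Spot-check the results
id light                                # personal group plus the extra groups
sudo tail /var/log/user_management.log  # actions logged by the script
```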
josephomotosho
1,910,849
Linux User Creation Automation with Bash Script
Introduction Manually creating users and adding them to groups is a tedious and...
0
2024-07-03T21:27:55
https://dev.to/ella-adeka/linux-user-creation-automation-with-bash-script-5dma
## Introduction Manually creating users and adding them to groups is a tedious and error-prone task, consuming valuable time and energy that could be better spent on more productive activities. Automating this process eliminates these issues, offering a more consistent, efficient, and time-saving solution. In this article, we'll walk through a bash script designed to read user details from a text file, create users and their specific groups, and log all activities. ## Script Overview The full script is available in a GitHub repository at [here](https://github.com/ella-adeka/hngstage1_linuxusercreation.git). Let's walk through the steps in the script of automating user management (creation): **1. Verify the input file** Check that an input file has been specified. It ensures that the number of arguments provided is exactly one. If not, it outputs an error message with usage instructions and exits the script. ```bash # Verify that an input file has been specified # checks the number of arguments ($#) is not equal to 1 (-ne) if [[ $# -ne 1 ]]; then echo "Error: No input file specified." echo "usage: $(basename "$0") <input_file>" exit 1 fi ``` **2. Set the files as variables** Define variables for the log file and the password file. `LOG_FILE` is used to record all actions taken by the script, and `PASSWORD_FILE` is where the generated passwords will be stored securely. ```bash # Log file to log all actions LOG_FILE="/var/log/user_management.log" # store generated passwords in user_passwords.txt PASSWORD_FILE="/var/secure/user_passwords.txt" ``` **3. Ensure both files exist** Ensure that the log file and the password file exist. This section of the script creates the files if they don't already exist and sets the appropriate permissions on the password file to ensure only the owner can read it. ```bash # Ensure both files exist touch $LOG_FILE mkdir -p /var/secure touch $PASSWORD_FILE # Set permissions for password file chmod 600 $PASSWORD_FILE ``` **4. Install password generator package** This script is designed specifically for Ubuntu. If you're using a different Linux distribution, you'll need to adapt the package installation commands accordingly. For Ubuntu, you can install the `pwgen` utility, which generates random passwords, with the following commands: ```bash # install pwgen to generate random password sudo apt-get update sudo apt-get install -y pwgen # verify installation if ! pwgen --version &>/dev/null; then echo "Error: pwgen installation failed." | tee -a $LOG_FILE exit 1 fi ``` **5. Create function to generate password using the installed package** The `generate_password` function utilises the `pwgen` utility to generate secure, random passwords. By default, it creates a 12-character password, but you can specify a different length by passing a parameter to the function. The `-s` option ensures the password is completely random. ```bash generate_password() { local password_length=${1:-12} pwgen -s $password_length 1 } ``` **6. Create function to create user** ```bash create_user() { # Create user group with same name as user # check iff user exists # create user with group # set permissions # generate random password and set it # store password securely # add user to group local user=$1 shift local groups=$@ # Check if user already exists if id "$user" &>/dev/null; then echo "User $user already exists." | tee -a $LOG_FILE return 0 fi if ! sudo groupadd "$user" 2>>$LOG_FILE; then echo "Failed to create group $user." 
| tee -a $LOG_FILE return 1 fi # Create user and set their primary group if ! sudo useradd -m -g "$user" "$user" 2>>$LOG_FILE; then echo "Failed to create user $user." | tee -a $LOG_FILE return 1 fi # Create additional groups if specified and add the user to them if [[ -n "$groups" ]]; then for group in $groups; do if ! getent group "$group" &>/dev/null; then if ! sudo groupadd "$group" 2>>$LOG_FILE; then echo "Failed to create group $group." | tee -a $LOG_FILE return 1 fi fi if ! sudo usermod -aG "$group" "$user" 2>>$LOG_FILE; then echo "Failed to add user $user to group $group." | tee -a $LOG_FILE return 1 fi done fi # Generate random password local password password=$(generate_password) if ! echo "$user:$password" | sudo chpasswd; then echo "Failed to set password for user $user." | tee -a $LOG_FILE return 1 fi # Store password securely echo "$user:$password" >>$PASSWORD_FILE # Set permissions for user's home directory if ! sudo chmod 700 "/home/$user"; then echo "Failed to set permissions for home directory of user $user." | tee -a $LOG_FILE return 1 fi # Log the user creation echo "Created user $user with groups: $groups" | tee -a $LOG_FILE } ``` **7. Read the file and create users** This section of the script reads the user information from the specified file. It processes each line to extract the username and groups, removing any leading or trailing whitespace. The groups are converted from a comma-separated list to a space-separated list. Finally, it calls the create_user function to create the user and assign the specified groups. ```bash while IFS=";" read -r user groups; do # remove whitespaces before and after username user=$(echo $user | xargs) # remove whitespaces before and after group name groups=$(echo $groups | xargs | tr ',' ' ') create_user $user $groups done < "$1" ``` **8. Output once script has run successfully** Output a message indicating that all users have been created successfully. The message is also appended to the log file to keep a record of the script's actions. ```bash echo "Users created successfully." | tee -a $LOG_FILE ``` ## Conclusion Automating user management with a bash script streamlines administrative tasks, ensuring consistency and security. By following the steps in this script, you can efficiently manage user accounts and groups in a Linux environment. ## Learn more Learn more about HNG internships and opportunities below: [HNG internship](https://hng.tech/internship) [HNG Premium](https://hng.tech/premium)
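As a closing note, here is a minimal sketch of what running the script might look like. The script name `create_users.sh` and the sample usernames are assumptions for illustration; the input format (one `user;group1,group2` entry per line) matches what the `while IFS=";" read -r ...` loop above parses.

```bash
# Hypothetical input file: one "username;group1,group2" entry per line
cat > users.txt <<'EOF'
ella;sudo,dev
tunde;www-data
EOF

# The script expects the input file as its single argument
sudo bash create_users.sh users.txt

# Verify: primary group equals the username, extra groups attached,
# and the password file is readable by root only (mode 600)
id ella
sudo ls -l /var/secure/user_passwords.txt
```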
ella-adeka
1,910,843
Stop Using Jest, Mocha, Chai !!
These three frameworks are a nightmare to install and get working with TypeScript! Instead do: bun i -D...
0
2024-07-03T21:14:50
https://dev.to/fershopls/stop-using-jest-mocha-chai--4b1c
These three frameworks are a nightmare to install and get working with TypeScript! Instead do: `bun i -D vitest`, then `bun run vitest`.
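For a bit of context, here is what a first test might look like with Vitest and TypeScript, with no extra transpiler config needed. The `sum` function and the file name are made up for the example:

```typescript
// sum.test.ts: runs with `bun run vitest` (or `npx vitest`)
import { describe, expect, it } from "vitest";

// A tiny function under test, defined inline for the example
function sum(a: number, b: number): number {
  return a + b;
}

describe("sum", () => {
  it("adds two numbers", () => {
    expect(sum(2, 3)).toBe(5);
  });
});
```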
fershopls
1,910,610
THIS IS A TEST
THIS IS A TEST THIS IS A TEST
0
2024-07-03T18:50:18
https://dev.to/alyconr/this-is-a-test-10g0
# THIS IS A TEST <p>THIS IS A TEST</p>
alyconr
1,910,841
BASH SCRIPT AUTOMATION FOR NEW USERS ONBOARDING
Managing users and groups on a Linux system can be a cumbersome task, especially when dealing with a...
0
2024-07-03T21:13:51
https://dev.to/emmanuel_joseph_85d6a165f/bash-script-automation-for-new-users-onboarding-149j
webdev, bash, bashscript, devops
Managing users and groups on a Linux system can be a cumbersome task, especially when dealing with a large number of users, automating the creation and management of user accounts can save significant time and reduce errors. This article will walk through a Bash script designed to automate this process, ensuring that new users and their respective groups, logs and passwords are created seamlessly. **Overview** This script performs the following tasks: 1. Checks and installs necessary dependencies. 2. Creates secure files and sets appropriate permissions. 3. Reads user and group data from an input file. 4. Creates groups and users as specified. 5. Assign passwords to users and store them securely. 6. Log all actions for auditing purposes. **The Script** Here's a detailed look at the Bash script designed to manage user accounts: ``` #!/bin/bash # Log file and secure passwords file LOGFILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.txt" # Ensure the secure passwords file exists and set the correct permissions sudo mkdir -p /var/secure sudo touch $PASSWORD_FILE sudo chmod 600 $PASSWORD_FILE # Function to generate a random password generate_password() { openssl rand -base64 12 } # Check if openssl is installed if ! command -v openssl &> /dev/null; then echo "openssl is required but not installed. Please install it and try again." >&2 exit 1 fi #Read the input file line by line while IFS=';' read -r username groups; do # Remove any leading or trailing whitespace username=$(echo "$username" | xargs) groups=$(echo "$groups" | xargs) # Create a personal group with the same name as the username if ! getent group "$username" > /dev/null 2>&1; then if sudo groupadd "$username"; then echo "$(date '+%Y-%m-%d %H:%M:%S') - Group '$username' created." >> "$LOGFILE" else echo "$(date '+%Y-%m-%d %H:%M:%S') - Error creating group '$username'." >> "$LOGFILE" continue fi else echo "$(date '+%Y-%m-%d %H:%M:%S') - Group '$username' already exists." >> "$LOGFILE" fi # Create the user if it does not exist if ! id -u "$username" > /dev/null 2>&1; then if sudo useradd -m -s /bin/bash -g "$username" "$username"; then echo "$(date '+%Y-%m-%d %H:%M:%S') - User '$username' created." >> "$LOGFILE" # Generate a random password for the user password=$(generate_password) echo "$username:$password" | sudo chpasswd echo "$username:$password" | sudo tee -a "$PASSWORD_FILE" > /dev/null # Set ownership and permissions for the user's home directory sudo chown "$username":"$username" "/home/$username" sudo chmod 700 "/home/$username" echo "$(date '+%Y-%m-%d %H:%M:%S') - Password for '$username' set and stored securely." >> "$LOGFILE" else echo "$(date '+%Y-%m-%d %H:%M:%S') - Error creating user '$username'." >> "$LOGFILE" continue fi else echo "$(date '+%Y-%m-%d %H:%M:%S') - User '$username' already exists." >> "$LOGFILE" fi # Add user to additional groups IFS=',' read -ra group_array <<< "$groups" for group in "${group_array[@]}"; do group=$(echo "$group" | xargs) if ! getent group "$group" > /dev/null 2>&1; then if sudo groupadd "$group"; then echo "$(date '+%Y-%m-%d %H:%M:%S') - Group '$group' created." >> "$LOGFILE" else echo "$(date '+%Y-%m-%d %H:%M:%S') - Error creating group '$group'." >> "$LOGFILE" continue fi fi if sudo usermod -aG "$group" "$username"; then echo "$(date '+%Y-%m-%d %H:%M:%S') - User '$username' added to group '$group'." >> "$LOGFILE" else echo "$(date '+%Y-%m-%d %H:%M:%S') - Error adding user '$username' to group '$group'." 
>> "$LOGFILE" fi done done < "$1" ``` **KEY FEATURES** *Logging:* All actions, including successes and errors, are logged to `/var/log/user_management.log`. *Password Security:* Generated passwords are stored securely in `/var/secure/user_passwords.txt` with appropriate permissions. *User and Group Creation:* Ensures users and their personal groups are created, even if the group name is not provided in the input. *Error Handling:* Checks for existing users and groups to avoid conflicts. **Running the Script** To run this script, follow these steps: 1. Make the script executable: `chmod +x create_users.sh` 2. Run the script with sudo: `sudo ./create_users.sh user_list.txt` 3. Provide the input file: the script reads each line from the input file, with the username and its groups separated by a semicolon. Multiple groups are separated by commas. Example Input File: ``` light; sudo,dev,www-data idimma; sudo mayowa; dev,www-data ``` This input creates the users light, idimma, and mayowa, assigning them to the specified groups. **Conclusion** Automating user management with a Bash script can streamline the onboarding process for new developers, ensuring consistency and security. This script provides a robust solution for creating user accounts and managing group memberships efficiently. For more insights and resources on improving your SysOps skills, explore the HNG Internship and discover how to hire talented developers from the [HNG platform](https://hng.tech/hire). For more articles and resources, visit https://hng.tech/premium.
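As a postscript, a few quick checks can confirm the script did what the log claims. The commands below assume the example input file shown earlier was used:

```bash
# Membership checks for the users from the example input
id light                # expect groups: light sudo dev www-data
groups idimma mayowa    # confirm the remaining users' memberships

# Passwords are stored one "username:password" pair per line, readable by root only
sudo cat /var/secure/user_passwords.txt

# Audit trail written by the script
sudo tail /var/log/user_management.log
```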
emmanuel_joseph_85d6a165f
1,901,312
Paralelismo e ConcorrΓͺncia 102: Java parallel streams na prΓ‘tica
Frite sua CPU com o poder do parallel streams
0
2024-07-03T21:10:07
https://dev.to/hugaomarques/paralelismo-e-concorrencia-102-java-parallel-streams-na-pratica-21g4
java, parallel, concurrency, threads
--- title: Paralelismo e ConcorrΓͺncia 102: Java parallel streams na prΓ‘tica published: true description: Frite sua CPU com o poder do parallel streams tags: #java #parallel #concurrency #threads cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xfszopvhn1gmdbe085z8.jpg # Use a ratio of 100:42 for best results. # published_at: 2024-07-03 --- No artigo anterior, ["Paralelismo e ConcorrΓͺncia 101"](https://dev.to/hugaomarques/paralelismo-e-concorrencia-101-2pgc), exploramos os conceitos fundamentais sobre esses dois tΓ³picos. Discutimos como esses conceitos permitem que programas realizem mΓΊltiplas tarefas simultaneamente, melhorando o desempenho e a eficiΓͺncia. Neste segundo artigo da sΓ©rie, vamos nos aprofundar no uso do `parallel stream` em Java. Introduzido no Java 8, o `parallel stream` Γ© uma funcionalidade que facilita o processamento paralelo de coleçáes, aproveitando mΓΊltiplos nΓΊcleos da CPU para melhorar o desempenho de operaçáes em grandes volumes de dados. Vamos explorar como o `parallel stream` funciona, suas vantagens e desvantagens, e como personalizar o pool de threads utilizado. TambΓ©m discutiremos a tΓ©cnica de "work-stealing" implementada pelo `ForkJoinPool` e sua importΓ’ncia para o balanceamento de carga e eficiΓͺncia. Vamos lΓ‘ que temos muito conteΓΊdo pra cobrir! ## Tabela de conteΓΊdo - [Parallel streams: Paralelismo de forma fΓ‘cil](#parallel-stream-paralelismo-de-forma-fΓ‘cil) - [Paralelismo e concorrΓͺncia](#paralelismo-e-concorrΓͺncia) - [ ForkJoinPool - Γ© o que homi 😳?](#forkjoinpool-Γ©-o-que-homi-) - [Work-stealing no ForkJoinPool](#-raw-workstealing-endraw-no-forkjoinpool) - [Performance entre Sequential e Parallel](#performance-entre-raw-sequential-endraw-e-raw-parallel-endraw-) - [ConclusΓ£o](#conclusΓ£o) ## Parallel stream: Paralelismo de forma fΓ‘cil O Java 8 introduziu streams como uma nova forma de iterar e realizar operaçáes em coleçáes de forma declarativa. Os streams fornecem uma API rica para manipulação de dados, permitindo operaçáes como filtro, mapeamento, redução e muito mais. Os streams podem ser criados no modo sequencial ou no modo de execução paralela. Vejamos como criar e usar ambos os tipos de streams com exemplos. ### Sequential ```java import java.util.Arrays; import java.util.List; public class SequentialStreamExample { public static void main(String[] args) { List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10); // Criando um stream sequencial numbers.stream() .forEach(n -> System.out.println("Thread: " + Thread.currentThread().getName() + " - NΓΊmero: " + n)); } } ``` Neste exemplo, o mΓ©todo stream() cria um stream sequencial a partir da lista de nΓΊmeros. A operação `forEach` itera sobre cada elemento da lista e imprime o nΓΊmero juntamente com o nome da thread que estΓ‘ processando o elemento. Como Γ© um stream sequencial, todos os elementos sΓ£o processados pela mesma thread. Se rodarmos o exemplo acima, nΓ³s veremos o seguinte resultado: ```java ➜ sandbox java SequentialStreamExample Thread: main - NΓΊmero: 1 Thread: main - NΓΊmero: 2 Thread: main - NΓΊmero: 3 Thread: main - NΓΊmero: 4 Thread: main - NΓΊmero: 5 Thread: main - NΓΊmero: 6 Thread: main - NΓΊmero: 7 Thread: main - NΓΊmero: 8 Thread: main - NΓΊmero: 9 Thread: main - NΓΊmero: 10 ``` ### Parallel Agora vamos rodar um exemplo com `parallel`. 
Para isso, basta criarmos o nosso stream com `parallel stream`: ```java import java.util.Arrays; import java.util.List; public class ParallelStreamExample { public static void main(String[] args) { List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10); // Criando um stream paralelo numbers.parallelStream() .forEach(n -> System.out.println("Thread: " + Thread.currentThread().getName() + " - NΓΊmero: " + n)); } } ``` Se executarmos esse exemplo mΓΊltiplas vezes, a ordem sempre vai ser algo diferente: ```java ➜ sandbox java ParallelStreamExample Thread: ForkJoinPool.commonPool-worker-2 - NΓΊmero: 2 Thread: main - NΓΊmero: 7 Thread: ForkJoinPool.commonPool-worker-6 - NΓΊmero: 4 Thread: ForkJoinPool.commonPool-worker-3 - NΓΊmero: 5 Thread: ForkJoinPool.commonPool-worker-4 - NΓΊmero: 9 Thread: ForkJoinPool.commonPool-worker-1 - NΓΊmero: 3 Thread: ForkJoinPool.commonPool-worker-9 - NΓΊmero: 10 Thread: ForkJoinPool.commonPool-worker-8 - NΓΊmero: 6 Thread: ForkJoinPool.commonPool-worker-7 - NΓΊmero: 8 Thread: ForkJoinPool.commonPool-worker-5 - NΓΊmero: 1 ``` Existem outras formas de criar streams, tanto sequenciais quanto paralelos, mas eu deixo isso como dever de casa pra vocΓͺs πŸ€“. ## Paralelismo e concorrΓͺncia Agora que vimos um exemplo simples com `parallel`, como ele se relaciona com os conceitos que discutimos no [artigo anterior de paralelismo e concorrΓͺncia](https://dev.to/hugaomarques/paralelismo-e-concorrencia-101-2pgc)? ### Como o `parallel stream` utiliza concorrΓͺncia? Quando vocΓͺ transforma um Stream em um `parallel stream`, se possΓ­vel, o Java divide a tarefa em vΓ‘rias subtarefas que podem ser executadas simultaneamente. Cada subtarefa Γ© atribuΓ­da a uma thread separada, que pode ser executada em um nΓΊcleo diferente do processador. O gerenciamento dessas threads envolve conceitos de concorrΓͺncia, como: * Thread Pool: O `ForkJoinPool` Γ© frequentemente usado para gerenciar threads em `parallel stream`. * Sincronização: Garantia de que as operaçáes em dados compartilhados sΓ£o seguras para threads. * Troca de contexto: AtravΓ©s do work-stealing o `ForkJoinPool` pega tasks da fila de threads ocupadas para serem executadas em threads que estΓ£o desocupadas. ### Como o `parallel stream` utiliza paralelismo? O `parallel stream` aproveita o paralelismo ao executar essas subtarefas simultaneamente em mΓΊltiplos nΓΊcleos. Isso pode resultar em uma execução mais rΓ‘pida, especialmente para operaçáes que sΓ£o independentes e podem ser realizadas em paralelo sem interferΓͺncia entre si. ## ForkJoinPool - Γ© o que homi 😳 ? Por padrΓ£o, o `parallel stream` utiliza o `ForkJoinPool.commonPool()`, que Γ© um pool de threads compartilhado disponΓ­vel para todas as tarefas de fork/join. Esse pool Γ© configurado para usar um nΓΊmero de threads igual ao nΓΊmero de nΓΊcleos disponΓ­veis no processador, o que permite que as tarefas sejam executadas em paralelo de maneira eficiente. Se quisermos verificar quantas threads o nosso `forkJoinPool` terΓ‘, basta fazermos print do nΓΊmero de cores disponΓ­veis no Java. Por exemplo, se eu rodar a seguinte linha em qualquer programa Java no meu mac: ```java System.out.println("Number of cores available: "+ Runtime.getRuntime().availableProcessors()); Number of cores available: 12 ``` Notem, que o nΓΊmero acima pode ser mais complicado um pouco, em especial, quando se envolve containers. ### Modificando o tamanho do `ForkJoinPool` padrΓ£o > πŸ›‘ Dica: NΓƒO faΓ§a isso! 
O tamanho padrΓ£o do `ForkJoinPool.commonPool()` pode ser modificado configurando a propriedade do sistema `java.util.concurrent.ForkJoinPool.common.parallelism`. Isso pode ser feito ao iniciar a JVM com a opção -D, por exemplo: ```java java -Djava.util.concurrent.ForkJoinPool.common.parallelism=8 MinhaAplicacao ``` Este comando configura o `ForkJoinPool.commonPool()` para utilizar 8 threads. ### Por que vocΓͺ **NΓƒO** deve modificar o tamanho padrΓ£o do ForkJoinPool? Modificar o tamanho padrΓ£o do `ForkJoinPool.commonPool()` pode afetar negativamente outras partes da aplicação ou bibliotecas que tambΓ©m utilizam o pool comum. O `commonPool` Γ© um recurso compartilhado, e alterar seu comportamento pode introduzir problemas de desempenho e concorrΓͺncia difΓ­ceis de diagnosticar. Em vez disso, Γ© recomendΓ‘vel criar e utilizar um `ForkJoinPool` personalizado para tarefas especΓ­ficas que requerem paralelismo ajustado, garantindo que outras partes da aplicação permaneΓ§am estΓ‘veis e previsΓ­veis. ### Qual Γ© uma alternativa melhor? > βœ… Dica: Se necessΓ‘rio, faΓ§a isso! Para modificar o pool de threads localmente vocΓͺ pode usar o mΓ©todo `ForkJoinPool#submit` para submeter uma tarefa que executa o `parallel stream` no contexto do pool personalizado. ```java import java.util.Arrays; import java.util.List; import java.util.concurrent.ForkJoinPool; public class CustomForkJoinPoolExample { public static void main(String[] args) { List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10); // Configurando o ForkJoinPool para usar um nΓΊmero especΓ­fico de threads ForkJoinPool customThreadPool = new ForkJoinPool(4); try { customThreadPool.submit(() -> numbers.parallelStream() .forEach(n -> { System.out.println("Thread: " + Thread.currentThread().getName() + " - NΓΊmero: " + n); }) ).get(); } catch (Exception e) { e.printStackTrace(); } finally { customThreadPool.shutdown(); } } } ``` Note que quando eu fiz o print anterior, eu tinha 12 cores disponΓ­veis, porΓ©m observe que ao executar o cΓ³digo acima, eu sΓ³ terei 4 threads. ```java Thread: ForkJoinPool-1-worker-4 - NΓΊmero: 6 Thread: ForkJoinPool-1-worker-4 - NΓΊmero: 8 Thread: ForkJoinPool-1-worker-1 - NΓΊmero: 7 Thread: ForkJoinPool-1-worker-2 - NΓΊmero: 3 Thread: ForkJoinPool-1-worker-3 - NΓΊmero: 9 Thread: ForkJoinPool-1-worker-3 - NΓΊmero: 1 Thread: ForkJoinPool-1-worker-4 - NΓΊmero: 10 Thread: ForkJoinPool-1-worker-1 - NΓΊmero: 2 Thread: ForkJoinPool-1-worker-2 - NΓΊmero: 5 Thread: ForkJoinPool-1-worker-3 - NΓΊmero: 4 ``` TambΓ©m note que no primeiro exemplo, nΓ³s temos tarefas sendo executadas pela `main` thread. PorΓ©m, no `custom` nossas tarefas sΓ£o executadas apenas pelos workers definidos no nosso pool. Agora que sabemos sobre como criar `parallel stream` e onde eles sΓ£o executados (`ForkJoinPool`), vamos discutir mais algumas coisas legais sobre essa funcionalidade. ## `Work-Stealing` no ForkJoinPool Como mencionamos anteriormente, o `ForkJoinPool` Γ© o pool de threads padrΓ£o utilizado pelo `parallel stream`. Esse pool implementa uma tΓ©cnica chamada "work-stealing" (roubo de trabalho). Esta tΓ©cnica Γ© fundamental para garantir a eficiΓͺncia e o balanceamento de carga entre as threads. ### Como Funciona o Work-Stealing? * DivisΓ£o de Tarefas: Quando uma tarefa Γ© submetida ao `ForkJoinPool`, ela Γ© dividida em sub-tarefas menores distribuΓ­das entre as threads do pool. Cada thread mantΓ©m uma fila de tarefas. * Execução Local: Cada thread tenta executar as tarefas da sua prΓ³pria fila. 
Se uma thread termina suas tarefas ou ficar ociosa, ela verifica se hΓ‘ mais trabalho a ser feito. * Roubo de Trabalho: Se uma thread fica sem tarefas, ela tenta roubar tarefas das filas de outras threads. Isso Γ© feito pegando tarefas do final da fila de outra thread, enquanto a prΓ³pria thread vΓ­tima continua pegando tarefas do inΓ­cio de sua fila. Este mecanismo de roubo de trabalho ajuda a manter todas as threads ocupadas e a balancear a carga de trabalho de maneira eficiente. ### Exemplo de Observação do Work-Stealing Vamos criar um ForkJoinPool personalizado com um nΓΊmero especΓ­fico de threads e simular tarefas com diferentes tempos de execução. O objetivo Γ© observar como as threads ociosas roubam trabalho das threads ainda ocupadas. ```java import java.util.Arrays; import java.util.List; import java.util.concurrent.ForkJoinPool; public class WorkStealingExample { public static void main(String[] args) { List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10); // Configurando o ForkJoinPool para usar um nΓΊmero especΓ­fico de threads ForkJoinPool customThreadPool = new ForkJoinPool(4); try { customThreadPool.submit(() -> numbers.parallelStream() .forEach(n -> { try { if (n % 2 == 0) { // Simulando uma tarefa que leva tempo Thread.sleep(2000); } else { Thread.sleep(100); } } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("Thread: " + Thread.currentThread().getName() + " - NΓΊmero: " + n); }) ).get(); } catch (Exception e) { e.printStackTrace(); } finally { customThreadPool.shutdown(); } } } ``` Note que se vocΓͺ executar o cΓ³digo acima, vai haver uma tendΓͺncia das tarefas Γ­mpares terminarem primeiro. Por exemplo, uma das minhas execuçáes foi: ```java Thread: ForkJoinPool-1-worker-2 - NΓΊmero: 3 Thread: ForkJoinPool-1-worker-1 - NΓΊmero: 7 Thread: ForkJoinPool-1-worker-3 - NΓΊmero: 9 Thread: ForkJoinPool-1-worker-2 - NΓΊmero: 5 Thread: ForkJoinPool-1-worker-4 - NΓΊmero: 6 Thread: ForkJoinPool-1-worker-4 - NΓΊmero: 1 Thread: ForkJoinPool-1-worker-3 - NΓΊmero: 10 Thread: ForkJoinPool-1-worker-1 - NΓΊmero: 2 Thread: ForkJoinPool-1-worker-2 - NΓΊmero: 4 Thread: ForkJoinPool-1-worker-4 - NΓΊmero: 8 ``` E o que acontece se o `forkJoinPool` nΓ£o tivesse `work-stealing`? Pois bem, as threads com tarefas menores terminariam suas tarefas primeiro e ficariam ociosas, enquanto as threads com tarefas maiores estariam ocupadas, gerando um gargalo desnecessΓ‘rio na aplicação. Maravilha! VocΓͺ deve estar pensando "Agora que eu sei tudo isso, eu sempre vou criar `parallel streams` para aumentar a performance das minhas aplicaçáes!" Calma jovem padawan, como sempre, depende... ## Performance entre `Sequential` e `Parallel` Existe uma linha onde utilizar parallel nΓ£o Γ© interessante. O Java Γ© excelente em otimizar o cΓ³digo da aplicação, entΓ£o muitas vezes, um simples sequential stream() vai ter uma performance excelente! Vamos ver um exemplo com cΓ³digo: ```java // Imports removidos pra manter o cΓ³digo breve. Veja o repo para o cΓ³digo completo. 
@BenchmarkMode(Mode.AverageTime) @OutputTimeUnit(TimeUnit.MILLISECONDS) @State(Scope.Thread) @Fork(value = 2) public class SeqParallelBenchmark { @Param({"100", "1000000"}) private int size; private List<Integer> data; @Setup public void setup() { data = IntStream.rangeClosed(1, size).boxed().collect(Collectors.toList()); } @Benchmark public void test_sequential() { data.stream().mapToInt(Integer::intValue).sum(); } @Benchmark public void test_parallel() { data.stream().parallel().mapToInt(Integer::intValue).sum(); } public static void main(String[] args) throws RunnerException { Options opt = new OptionsBuilder() .include(SeqParallelBenchmark.class.getName()) // specify the benchmark class here .forks(2) .build(); new Runner(opt).run(); } } ``` Quando eu rodei o benchmark acima na minha mΓ‘quina eu obtive o seguinte resultado: ```java Benchmark (size) Mode Cnt Score Error Units SeqParallelBenchmark.test_sequential 100 avgt 10 β‰ˆ 10⁻⁴ ms/op SeqParallelBenchmark.test_parallel 100 avgt 10 0.022 Β± 0.008 ms/op SeqParallelBenchmark.test_sequential 1000000 avgt 10 0.482 Β± 0.016 ms/op SeqParallelBenchmark.test_parallel 1000000 avgt 10 0.117 Β± 0.016 ms/op ``` Observe que com 100 elementos, o mΓ©todo sequential roda em 0.0001 ms enquanto que o parallel leva 0.022 ms, quase 200x mais lento. PorΓ©m, quando rodamos para milhΓ΅es de elemento, o parallel se sai 4x mais rΓ‘pido to que o resultado sequencial. Eu rodei outro benchmark com uma operação diferente dessa vez: ```java // Imports removidos pra manter o cΓ³digo breve. Veja o repo para o cΓ³digo completo. @BenchmarkMode(Mode.AverageTime) @OutputTimeUnit(TimeUnit.MILLISECONDS) @State(Scope.Thread) public class AnotherSeqParallelBenchmark { @Param({"100", "1000000"}) private int size; private List<Integer> data; @Setup public void setup() { data = IntStream.rangeClosed(1, size).boxed().collect(Collectors.toList()); } @Benchmark public List<Double> testSequentialStream() { return data.stream() .map(Math::sin) .collect(Collectors.toList()); } @Benchmark public List<Double> testParallelStream() { return data.parallelStream() .map(Math::sin) .collect(Collectors.toList()); } public static void main(String[] args) throws RunnerException { Options opt = new OptionsBuilder() .include(AnotherSeqParallelBenchmark.class.getName()) // specify the benchmark class here .forks(2) .build(); new Runner(opt).run(); } } ``` E o resultado foi bem similar: ```java Benchmark (size) Mode Cnt Score Error Units AnotherSeqParallelBenchmark.testSequentialStream 100 avgt 10 0.001 Β± 0.001 ms/op AnotherSeqParallelBenchmark.testParallelStream 100 avgt 10 0.027 Β± 0.005 ms/op AnotherSeqParallelBenchmark.testParallelStream 1000000 avgt 10 2.234 Β± 0.025 ms/op AnotherSeqParallelBenchmark.testSequentialStream 1000000 avgt 10 16.164 Β± 0.573 ms/op ``` Note como mais uma vez em cima uma de collection pequena o parallel teve uma performance pior que o sequential (~27x pior). PorΓ©m, ao se deparar com milhΓ΅es de cΓ‘lculo a performance foi ~8x mais rΓ‘pida do que sequential. ## ConclusΓ£o **NΓ£o utilize `parallel`:** 1. **Se sua operação Γ© I/O bound**: Seja uma chamada de rede para um microserviΓ§o ou escrita de arquivos. Por quΓͺ? VocΓͺ estarΓ‘ utilizando muito mais o I/O do que a CPU, inclusive, vocΓͺ pode abrir muito mais operaçáes do que apenas o nΓΊmero de nΓΊcleos da sua aplicação. Por exemplo, em um cenΓ‘rio recente eu abri ~500 conexΓ΅es com outro microserviΓ§o 😱, muito acima dos 8 nΓΊcleos da minha mΓ‘quina. 2. 
**Se a sua massa de dados Γ© pequena**: `Parallel` adiciona um overhead, como vimos acima, com a utilização de uma thread pool, coordenação de tarefas, roubo de tarefas paradas, etc. Dado isso, se sua massa de dados for pequena, Γ© provΓ‘vel que um `sequential` execute mais rΓ‘pido do que um `parallel`. Na dΓΊvida, execute testes na sua aplicação para observar se vale a pena. **Utilize `parallel`:** 1. **Se o seu cenΓ‘rio Γ© CPU bound**: Transformação de dados, cΓ‘lculos, etc. Por quΓͺ? Se o nΓΊmero padrΓ£o de threads no `parallel stream` Γ© o nΓΊmero de nΓΊcleos da mΓ‘quina e sua operação Γ© CPU bound, isso significa que vocΓͺ vai conseguir utilizar todos os nΓΊcleos em paralelo, aproveitando ao mΓ‘ximo os recursos disponΓ­veis. 2. **Se as suas tarefas sΓ£o independentes**: Como vocΓͺ quer usar paralelismo, Γ© interessante que suas tarefas sejam independentes e isoladas, jΓ‘ que elas serΓ£o executadas em contextos diferentes e em uma ordem imprevisΓ­vel. 3. **Se sua mΓ©trica mostra ganho de performance**: Realize alguns testes, por exemplo, utilizando uma ferramenta como JMH. `Parallel streams` Γ© um modo fΓ‘cil de embarcar no mundo da programação paralela, jΓ‘ que a funcionalidade abstrai diversas preocupaçáes. Ela nΓ£o Γ© aplicΓ‘vel para todos os casos, mas quando BEM aplicada pode ser uma mΓ£o na roda para aumentar a performance de suas aplicaçáes. Todos os exemplos deste post estΓ£o disponΓ­veis no meu repositΓ³rio: {% github hugomarques/sandbox %} No prΓ³ximo post, vamos explorar mais o uso de threads tradicionais e thread pools com executors. Nos vemos por lΓ‘!
hugaomarques
1,910,840
Get your own piece of internet real estate in 5 minutes
There are tons of options out there for publishing content on our planet's global communication...
0
2024-07-03T21:08:14
https://dev.to/defenderofbasic/get-your-own-piece-of-internet-real-estate-in-5-minutes-1dpj
beginners, amateur, homecookedsoftware
There are tons of options out there for publishing content on our planet's global communication network: Instagram, Substack, Mastodon, YouTube, Reddit. These are all great, you should use whatever works, wherever you find your audience. There is one way of putting stuff on the internet that is rarely advertised, because no one really makes money from it: making your own website. ### You don't need to be a "coder" to have your own piece of the internet. If you can maintain a google doc, you can maintain a web page I love Robin Sloan's philosophy on this. He talks about [making software like making a home-cooked meal](https://www.robinsloan.com/notes/home-cooked-app/) > People don’t only learn to cook so they can become chefs. Some do! But many more people learn to cook so they can eat better, or more affordably. Because they want to carry on a tradition. Sometimes they learn because they’re bored! Or even because they enjoy spending time with the person who’s teaching them. > ... > This messaging app I built for, and with, my family, it won’t change unless we want it to change. There will be no sudden redesign, no flood of ads, no pivot to chase a userbase inscrutable to us. It might go away at some point, but that will be our decision. **What is this feeling? Independence? Security? Sovereignty?** This last piece is really important. Websites that I made 20 years ago still work. The companies I used for hosting don't exist anymore, but I can easily move my website anywhere else (just like copying text out of one note app into note app!) ### step 0: sign up to Glitch for web hosting Glitch.com is one website I really like that gives us free web hosting. Sign up: https://glitch.com/signup I made this template for you: ### step 1: copy this template 1. Open this template: https://glitch.com/edit/#!/personal-website-template-v2 2. Click "Remix to Edit" in the top right ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t519obg7myhim633vwoj.png) ### step 2: edit the HTML Change the title of the web page by editing the text. Text between `<h1>` tags is a "header". `<p>` stands for paragraph, etc. ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jtv4hbsymmm2nyzv2t2y.gif) ### step 3: share your website 1. Click "settings" on the left, then "Edit project details" to change the domain name of your website ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jgofova7njx4we9a3jk2.png) 2. Click "open in a new window" in the preview pane ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tnf5luz3s4bq35jg7d8k.png) congrats! that is your website link, anyone on the planet with that URL can read your words!! ### That's it! - you can use this to collect links to all your work online - google or ask chatGPT "how do I do X in HTML", the sky is limit! - every website you've ever seen is HTML. You can just copy any part of it and put it on your website!!! - you can upload pictures (drag and drop them into the `Assets` tab), or embed youtube videos (click "share" then "embed") For inspiration, see: - https://neocities.org/ - https://sundaysites.cafe/ The other free web host I like to use is GitHub Pages. It's nice if you prefer editing your website files on your computer vs in the browser.
defenderofbasic
1,910,839
New Rails 7.2: Added Rate Limiting
Ruby on Rails 7.2 added rate limiting to Action Controller. It's cool and easy to use. ...
0
2024-07-03T21:05:22
https://jetthoughts.com/blog/new-rails-72-added-rate-limiting-changelog/
rails, changelog
Ruby on Rails 7.2 added rate limiting to Action Controller. It's cool and easy to use. ![rate_limit in Rails 7.2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j0sfw82qnev4mx1kquvu.png) What's New? ----------- Rails now offer built-in rate limiting using Redis and Kredis. This empowers you to control how often a user can act within a given time, putting you in the driver's seat. Before: ------- ```ruby # Manually implemented rate limiting ``` After: ------ ```ruby class SessionsController < ApplicationController rate_limit to: 10, within: 3.minutes, only: :create end class SignupsController < ApplicationController rate_limit to: 1000, within: 10.seconds, by: -> { request.domain }, with: -> { redirect_to busy_controller_url, alert: "Too many signups!" }, only: :new end ``` How It Works ------------ 1. **Setup:** Ensure you have Redis and Kredis 1.7.0+. 2. **Usage:** Add `rate_limit` in your controllers. Why This Matters ---------------- This feature is great for preventing abuse and managing traffic, and it's also highly flexible. You can customize limits per action and gracefully handle exceeded limits, making it a perfect fit for your needs. Conclusion ---------- This update makes managing traffic easier. It's straightforward and effective. If you're using Rails, give it a try! For more details, check out the [official pull request](https://github.com/rails/rails/pull/50490).
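To watch the limiter kick in without writing a test, you can hammer the endpoint from a terminal. This is only a sketch: it assumes a local app on port 3000, that the `SessionsController` above is mapped to `POST /session`, and that the default behaviour of answering with HTTP 429 when the limit is exceeded is in place.

```bash
# Fire 11 POSTs at the login endpoint; with `rate_limit to: 10, within: 3.minutes`,
# the final request should come back as 429 Too Many Requests.
for i in $(seq 1 11); do
  curl -s -o /dev/null -w "request $i -> %{http_code}\n" -X POST http://localhost:3000/session
done
```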
jetthoughts_61
1,910,838
an exploration into web 3.0 (without blockchain) for the rest of us... or just me and my imaginary friends
As someone you support the ideas like right to repair, sematic web, and consumer driven data...
0
2024-07-03T21:03:35
https://dev.to/unit9a/an-exploration-into-web-30-without-blockchain-for-the-rest-of-us-or-just-me-and-my-imaginary-friends-1fc6
webdev, javascript, semanticweb, deno
As someone who supports ideas like right to repair, the semantic web, and consumer-driven data governance, I wanted to play around with these ideas and expand my skills along the way. This blog will be me sharing my journey of learning and humility. I hope it encourages others to overcome imposter syndrome and fears related to being transparent about making mistakes and being imperfect. This/I will not be a safe space for tech-bro/bra elitist culture, theocrats, toxic masculinity grifters, genderism, anti-trans/queer, ethnic/white supremacists of any sort, and traitorous king lovers/worshipers who claim loyalty to the USA constitution... fuck you... fuck you all. Inclusive & democracy-loving geeks/nerds otherwise welcome. Initial idea for my tech stack - deno.js & deno-fresh - solid-pods - user data - gun.js - main DB - couchDB - if needed maybe? Idea 1: create a pwa based family of apps to improve the experience of a tech job seeker in this job market... if at all possible. This is just the ramblings of a non-expert. I don't want to contribute to the JavaScript tribal fights/wars. My ideas are not the best for anything worth fighting about. stay tuned. Reminder, no one has to be perfect to be worthy of compassion and grace. Have a great day.
unit9a
1,910,837
Bash Script for Simplified Linux User and Group Setup
Using Linux in the Cloud provides a multi-user environment where multiple people can access a server...
0
2024-07-03T21:02:21
https://dev.to/seyramgabriel/bash-script-for-simplified-linux-user-and-group-setup-1jif
Using Linux in the Cloud provides a multi-user environment where multiple people can access a server and perform tasks relevant to their jobs. However, this necessitates measures to prevent users or groups from having more access to files than necessary for their respective roles. A SysOps Engineer or Linux Administrator must regulate access according to the least privilege principle. The least privilege principle is a security concept that states a user should only be given the minimum amount of access they require. Ensuring adherence to this principle can be tedious and error-prone, especially when managing a large number of users and groups. Manually adding users to a server and assigning them to relevant groups on a recurrent basis is not only time-consuming but also prone to mistakes. This creates the need to automate the task of adding users, creating groups, and assigning users to groups. Automation ensures consistency, efficiency, and reduces the likelihood of errors. Below is a bash script designed to automate the process of adding users and groups and assigning users to groups. Using this script for user and group management will ensure a consistent and efficient approach to maintaining access control in a Linux environment. ``` Script #!/bin/bash # Ensure the script is run as root if [[ $EUID -ne 0 ]]; then echo "This script must be run as root" exit 1 fi # Log file LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" # Create secure directory for password storage if it doesn't exist mkdir -p /var/secure chmod 700 /var/secure # Function to log messages log_message() { echo "$(date +"%Y-%m-%d %T") : $1" >> $LOG_FILE } # Check if the file exists if [[ -z "$1" || ! -f "$1" ]]; then echo "Usage: $0 <path-to-username-file>" exit 1 fi # Process each line of the input file while IFS=';' read -r username groups; do # Remove any leading or trailing whitespace username=$(echo "$username" | xargs) groups=$(echo "$groups" | xargs) # Check if the user already exists if id "$username" &>/dev/null; then log_message "User $username already exists." fi # Create user with a home directory useradd -m -s /bin/bash "$username" &>/dev/null if [[ $? -eq 0 ]]; then log_message "User $username created." else log_message "Failed to create user $username." fi # Create a group with the same name as the username groupadd "$username" &>/dev/null usermod -a -G "$username" "$username" # Add user to additional groups IFS=',' read -ra ADDR <<< "$groups" for group in "${ADDR[@]}"; do group=$(echo "$group" | xargs) # Trim whitespace if ! getent group "$group" >/dev/null; then groupadd "$group" log_message "Group $group created." fi usermod -a -G "$group" "$username" done # Set permissions for the home directory chmod 700 "/home/$username" chown "$username:$username" "/home/$username" #Check if password is already set if passwd -S "$username" | grep -E 'P|NP' &>/dev/null; then log_message "Password for $username already set." continue fi # Generate a random password password=$(openssl rand -base64 12) # Hash the password hashed_password=$(openssl passwd -6 "$password") # Set the hashed password for the user echo "$username:$hashed_password" | chpasswd -e # Log the hashed password for the CSV echo "$username,$password" >> $PASSWORD_FILE log_message "Password for $username set." 
done < "$1" # Set permissions for the password file chmod 600 $PASSWORD_FILE chown root:root $PASSWORD_FILE # Set permissions for the log file chmod 600 $LOG_FILE chown root:root $LOG_FILE log_message "USER CREATION PROCESS COMPLETED, USER CREATION PROCESS COMPLETED." exit 0 ``` ## Requirements of the Script - The script must be saved as a bash file (eg. create_users.sh). - The script takes a text file as input (an argument). - The input file should be properly formatted, each line should present usernames and groups, as in "username; group1,group2,..." - The script is run as root, you must have the necessary permissions for user and group management. ## Key Features of the Script - User and Group Creation: Each user is created with a home directory and a primary group with the same name as the user. The primary group need not be stated in the input file since it will be created using the username. - Group Assignment: Users are assigned to additional groups as specified in the input file. - Password Generation and Security: A random password is generated for each user, stored securely in /var/secure/- user_passwords.csv. - Logging: All actions are logged to /var/log/user_management.log for auditing and troubleshooting. ## Summary of Script Logic - Input Validation: The script ensures it is run as root and the input file is provided and exists. - Processing Input: Each line is processed to extract the username and groups. Whitespace is trimmed to avoid errors. - User Creation: The script will check if the user already exists. If not, it creates the user and their primary group (which is same name as username). The home directory of the user to which the user is the owner and the only one with "rwx" (read, write, execute) rights to it is also created. - Group Management: The script ensures additional groups exist (if not, it creates them) and assigns the user to these groups. - Password Management: The script generates a secure password, assigns it to the user, and logs it securely in /var/secure/user_passwords.csv. Only the root user would have access(read and write) to this file. - Logging: Each step executed by the script is logged for monitoring and auditing in /var/log/user_management.log along with the timestamp. Only the root user would have access(read and write) to this file. ## Error Handling The script includes checks to handle existing users, missing groups, and file permissions, ensuring robust operation in various scenarios. ## Script Execution: The script is executed as follows - ```./create_users.sh <path-to-username-file>``` or ```bash create_users.sh <path-to-username-file>``` You must be a root user or someone with root user privileges. Consequently, if you are not running as a root user, you must use "sudo", and that is if you are a "sudoer": ```sudo bash create_users.sh <path-to-username-file>``` Example of input file content: ``` light;sudo,dev,www-data idimma;sudo mayowa;dev,www-data ``` light, idimma, and mayowa are the usernames. sudo, dev, and www-data are the group names. Note that sudo and www-data are system wide groups and already exist, by default. ## Testing the Efficacy of the Script After Running it ### Run this command to output all current human users. ```awk -F: '$3 >= 1000 {print $1}' /etc/passwd``` Do well to look out for the groups in your input text file. Note: You could also output all existing users with ```cat /etc/passwd```But it will output all users, not just human users. ### Run the id command for a specific user ```id <username>``` eg. 
```id idimma``` If the user exists, this will display information about the user. If not, it will show an error message. ### Run the following command to view and verify all the home directories of the created users ```cd /home && ls``` ### Run this to output all groups ```cat /etc/group``` ### Run this to check the existence of specific groups ```getent group <groupname>``` eg. ```getent group dev``` or ```getent group sudo``` Once you run it with the specific group name, it will show you the group (if it exists) and the users assigned to it. If not, it will show no output. ### Run this to output content of log file ```cat /var/log/user_management.log``` ### Run this command to verify the access permissions on /var/log/user_management.log ```ls -al /var/log/user_management.log``` ### Run this command to view passwords, verify if user and password are delimited by "," and passwords are hashed ```cat /var/secure/user_passwords.csv``` ### Run to output the content and verify the access permissions on /var/secure/user_passwords.csv ```ls -al /var/secure/user_passwords.csv``` ## Deeper Dive Thus far, this write-up has provided enough information for a high level documentation. The following section dives a little deeper into the work of each block of code in the script. ## Block 1 ``` #!/bin/bash ``` This line specifies that the script should be run using the Bash shell. ## Block 2 ``` # Ensure the script is run as root if [[ $EUID -ne 0 ]]; then echo "This script must be run as root" exit 1 fi ``` EUID - Effective User ID - is the ID of the current user. The ID of the root user is 0. Hence, this line ensures that the user running the script is a root user or has root user privileges. exit 1 - This line exits the script with an error code of 1 if the user is not a root user. ## Block 3 ``` # Log file LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" ``` These lines define variables for the paths of the log file and the password file. So, subsequent to this block of code, $LOG_FILE is same as "/var/log/user_management.log" and $PASSWORD_FILE refers to "/var/secure/user_passwords.csv". ## Block 4 ``` # Create secure directory for password storage if it doesn't exist mkdir -p /var/secure chmod 700 /var/secure ``` This creates the /var/secure directory and sets its permissions to 700 so that only the owner can read, write, and execute. Note that /var/log directory already exists by default, so it doesn't need to be created like the /var/secure directory. ## Block 5 ``` #Function to log messages log_message() { echo "$(date +"%Y-%m-%d %T") : $1" >> $LOG_FILE } ``` This defines a function called log_message that appends a timestamped message "$(date +"%Y-%m-%d %T")" to the log file along with texts that will serve as arguments ("$1") anytime the function is called. Effectively, this block of code is behind the logging to /var/log/user_management.log file, since the ">>" ensures that successive log messages "echoed" do not override previous ones. ## Block 6 ``` # Check if the file exists if [[ -z "$1" || ! -f "$1" ]]; then echo "Usage: $0 <path-to-username-file>" exit 1 fi ``` This checks if an argument (the path to the username file) was provided, if it exists, and if it is a regular file. If not, it prints usage instructions and exits. 
## Block 7 ``` # Process each line of the input file while IFS=';' read -r username groups; do # Remove any leading or trailing whitespace username=$(echo "$username" | xargs) groups=$(echo "$groups" | xargs) ``` This ensures the script reads each line of the input file, splitting it into username and groups using ";" as the delimiter. "xargs" is used to remove leading and trailing whitespace in the input file, this helps to avoid errors in processing the file. ## Block 8 ``` # Check if the user already exists if id "$username" &>/dev/null; then log_message "User $username already exists." fi ``` This checks if the user already exists and if so, logs a message to that effect. "log_message" is used to call the log_message function, then "User $username already exists." becomes "$1" as indicated in the log_message function. The log message here would help explain user creation error logs in Block 9, if the error is due to a user already existing. ## Block 9 ``` # Create user with a home directory useradd -m -s /bin/bash "$username" if [[ $? -eq 0 ]]; then log_message "User $username created." else log_message "Failed to create user $username." fi ``` This creates the user with a home directory and sets the default shell to /bin/bash. If successful, logs a message; otherwise, logs an error. "$?" is equivalent to the exit status of the just ended action. An exit 0 means success. A typical use case for failure here would be where the user already exists. In that case, the exit status would be 1. The script checks for this and logs an error message then moves to the next block of code. ## Block 10 ``` # Create a group with the same name as the username groupadd "$username" &>/dev/null usermod -a -G "$username" "$username" ``` This creates a group with the same name as the username and adds the user to this group. Remember, each username is used to create a primary group. ## Block 11 ``` # Add user to additional groups IFS=',' read -ra ADDR <<< "$groups" for group in "${ADDR[@]}"; do group=$(echo "$group" | xargs) # Trim whitespace if ! getent group "$group" >/dev/null; then groupadd "$group" log_message "Group $group created." fi usermod -a -G "$group" "$username" done ``` This splits the groups string by commas and processes each group. It trims whitespace, checks if the group exists, creates it if it does not exist, logs a message about the group created and adds the user to the group. Note that if the group already exists, there is no log to say it already exists, so as not to flood the log file. The user is just added. In case an already existing user has been assigned another group, this code block adds the user to that group. ### Block 12 ``` # Set permissions for the home directory chmod 700 "/home/$username" chown "$username:$username" "/home/$username" ``` This sets the permissions of the user's home directory to 700 and changes the ownership to the user and their group. This means that only the users can read, write, and executive within their respective home directories. ### Block 13 ``` # Check if password is already set if passwd -S "$username" | grep -E 'P|NP' &>/dev/null; then log_message "Password for $username already set, skipping." continue fi ``` This is used to check if the user's password is already set. It takes care of a use case where an already existing user has the password set, in which case the subsequent code blocks for setting of password must be skipped. 
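To make Blocks 7 and 11 easier to picture, here is a tiny standalone snippet you can paste into a shell. The sample line is the `light; sudo,dev,www-data` example from earlier; nothing is actually created, it only prints what the real script would do.

```bash
#!/bin/bash
# Demo of the two-level parsing: ';' separates the user from the groups, ',' separates the groups
line="light; sudo,dev,www-data"
IFS=';' read -r username groups <<< "$line"
username=$(echo "$username" | xargs)   # trim whitespace, as in Block 7
groups=$(echo "$groups" | xargs)
IFS=',' read -ra ADDR <<< "$groups"    # split the group list, as in Block 11
for group in "${ADDR[@]}"; do
  group=$(echo "$group" | xargs)
  echo "would run: usermod -a -G $group $username"
done
```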
### Block 14 ``` # Generate a random password password=$(openssl rand -base64 12) ``` This generates a random password for the user using openssl, but the password will be in plain text. ### Block 15 ``` # Hash the password hashed_password=$(openssl passwd -6 "$password") ``` This hashes or encodes the plain text password. ### Block 16 ``` # Set the hashed password for the user echo "$username:$hashed_password" | chpasswd -e ``` This sets the hashed password for the user using chpasswd. The -e flag is used to indicate the password is already encrypted. ### Block 17 ``` # Log the hashed password for the CSV echo "$username,$password" >> $PASSWORD_FILE ``` This logs the hashed password of the user in the format "user,password" into the "/var/secure/user_passwords.csv" file. ### Block 18 ``` log_message "Password for $username set." done < "$1" ``` This logs the action of password being set into "/var/log/user_manangement.log" file. ### Block 19 ``` # Set permissions for the password file chmod 600 $PASSWORD_FILE chown root:root $PASSWORD_FILE ``` This sets the permissions of the password file to 600 (read and write for the owner only) and changes the ownership to root. This block of code ensures that only the root user has access to "/var/secure/user_passwords.csv file". ### Block 20 ``` # Set permissions for the log file chmod 600 $LOG_FILE chown root:root $LOG_FILE ``` This sets the permissions of the log file to 600 (read and write for the owner only) and changes the ownership to root. This block of code ensures that only the root user has access to "/var/slog/user_management.log file". ### Block 21 ``` log_message "USER CREATION PROCESS COMPLETED, USER CREATION PROCESS COMPLETED." exit 0 ``` This logs that the user creation process is complete and exits the script. The log message here is in upper case and repeated so as to ease the burden of studying log messages between successive script runs. ### Conclusion The script simplifies user management in a Linux environment, ensuring consistency, security, and efficiency. By automating these tasks, SysOps engineers or Linux Administrators can focus on more critical aspects of system management. ### Disclaimer: The article breaks down a bash script designed for automating user management in Linux. However, ensure it is tailored to your specific organizational needs. ## Acknowledgment: This write up was inspired by a task assigned to DevOps interns in the HNG Internship Programme. Find out more on: https://hng.tech/internship, https://hng.tech/hire, or https://hng.tech/premium.
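Appendix: the password flow from Blocks 14 to 16 can also be tried in isolation. The username below is only an example and the last command really changes a password, so run it on a throwaway machine or container.

```bash
# Generate, hash (SHA-512 crypt) and apply a password, mirroring Blocks 14-16
password=$(openssl rand -base64 12)
hashed_password=$(openssl passwd -6 "$password")
echo "alice:$hashed_password" | sudo chpasswd -e
echo "alice,$password"   # the line the script would append to /var/secure/user_passwords.csv
```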
seyramgabriel
1,910,835
Smart SysAdmin: Automating User Management on Linux with Bash Scripts
Introduction This article is a step-by-step walkthrough of a project I recently worked on,...
0
2024-07-03T21:00:16
https://dev.to/databishop/smart-sysadmin-automating-user-management-on-linux-with-bash-scripts-57mc
### Introduction This article is a step-by-step walkthrough of a project I recently worked on, where I needed to automate user management in Linux This project was one of the tasks for the HNG internship, a program designed to accelerate learning and development in the tech industry. This guide will provide an in-depth look at how I developed this bash script, explaining the reasoning behind each line of code and the approach I took. By sharing my thought process and important considerations, I aim to demonstrate my proficiency in bash scripting and system administration. Whether you are a beginner or an experienced sysadmin, this guide will help you understand the nuances of automating user management in Linux. ### Understanding the Requirements **Overview:** - Write a bash script called `create_users.sh`. - The script reads a text file containing usernames and groups. - Create users and groups as specified. - Set up home directories with appropriate permissions and ownership. - Generate random passwords for the users. - Log all actions to `/var/log/user_management.log`. - Store generated passwords securely in `/var/secure/user_passwords.csv`. **Input Format:** - Each line in the input file is formatted as `user;groups`. - Multiple groups are separated by commas `,`. - Usernames and groups are separated by a semicolon `;`. **Criteria:** - Users should be created and assigned to their groups. - Logging actions to `/var/log/user_management.log`. - Storing passwords in `/var/secure/user_passwords.csv`. --- ### Detailed Script Execution Let's break down the code step by step: #### A. Ensuring Root Privileges ```bash #!/bin/bash ``` This is the shebang line that specifies the script should be run using the bash shell. It ensures the script is executed in the correct shell environment. ```bash # Check if the script is run as root if [ "$(id -u)" -ne 0 ]; then echo "This script must be run as root" exit 1 fi ``` This section checks if the script is being run as the root user. The `id -u` command returns the user ID of the current user. The root user has a user ID of 0. If the script is not run as root (i.e., the user ID is not 0), it prints an error message and exits with a status of 1. Running as root is necessary because creating users and modifying system files requires superuser privileges. **Alternatives:** Instead of exiting, you could prompt the user to re-run the script with `sudo` or use a function to elevate privileges automatically. #### B. Setting Up Log and Secure Directory Logging actions and securely storing generated passwords are crucial for monitoring and auditing purposes. ```bash # Log file LOG_FILE="/var/log/user_management.log" SECURE_DIR="/var/secure" PASSWORD_FILE="$SECURE_DIR/user_passwords.csv" ``` These lines define variables for the paths of the log file and the secure directory. - `LOG_FILE` is the path where the script will log its actions. - `SECURE_DIR` is the directory where the password file will be stored. - `PASSWORD_FILE` is the file where generated passwords will be securely stored. ```bash # Create secure directory if it doesn't exist mkdir -p "$SECURE_DIR" chmod 700 "$SECURE_DIR" ``` - **Explanation:** - `mkdir -p "$SECURE_DIR"`: This command creates the `SECURE_DIR` directory if it doesn't already exist. The `-p` option ensures that no error is reported if the directory already exists, and it creates any parent directories as needed. - `chmod 700 "$SECURE_DIR"`: This command sets the permissions of the `SECURE_DIR` directory to `700`. 
This means: - The owner (root) has read, write, and execute permissions. - No permissions are granted to the group or others. This ensures the directory is secure and only accessible by the root user. ```bash # Clear the log and password files true > "$LOG_FILE" true > "$PASSWORD_FILE" chmod 600 "$PASSWORD_FILE" ``` - **Explanation:** - `true > "$LOG_FILE"` and `true > "$PASSWORD_FILE"`: These commands clear the contents of the log file and the password file, respectively. Using `>` truncates the file to zero length if it exists, effectively clearing it. If the file does not exist, it is created. Note the importance of the `true`command as it acts as a no-op (no operation) command and always ensures that the redirection operator has a valid command associated with it, and the files will be truncated as intended. - `chmod 600 "$PASSWORD_FILE"`: This command sets the permissions of the `PASSWORD_FILE` to `600`. This means: - The owner (root) has read and write permissions. - No permissions are granted to the group or others. This ensures that the password file is secure and only readable and writable by the root user. **Alternatives:** One can use a logging framework or tool for more advanced logging capabilities and security measures. #### C. Processing the Input File The section reads an input file containing usernames and group assignments, and processes each line as needed. ```bash # Read the input file line by line while IFS=';' read -r username groups; do # Remove any leading or trailing whitespace username=$(echo "$username" | xargs) groups=$(echo "$groups" | xargs) # Skip empty lines [ -z "$username" ] && continue ``` **Explanation** 1 **Reading the Input File Line by Line:** ```bash while IFS=';' read -r username groups; do ``` - **`while ...; do ... done`**: This is a loop that continues to execute as long as the `read` command successfully reads a line from the input. - **`IFS=';'`**: `IFS` stands for Internal Field Separator. By setting it to `;`, we are telling the `read` command to use `;` as the delimiter for splitting each line into fields. - **`read -r username groups`**: The `read` command reads a line from the input file, splits it into two parts using `;` as the delimiter, and assigns the first part to the `username` variable and the second part to the `groups` variable. - **`-r`**: This option prevents backslashes from being interpreted as escape characters. 2 **Removing Leading or Trailing Whitespace:** ```bash username=$(echo "$username" | xargs) groups=$(echo "$groups" | xargs) ``` - **`$(...)`**: This is command substitution, which captures the output of the command inside the parentheses. - **`echo "$username" | xargs`**: - `echo "$username"`: Prints the value of the `username` variable. - The output of `echo` is piped to `xargs`. - **`xargs`**: This command trims any leading or trailing whitespace from its input. - The same process is applied to the `groups` variable to remove any leading or trailing whitespace. 3 **Skipping Empty Lines:** ```bash [ -z "$username" ] && continue ``` - **`[ -z "$username" ]`**: This is a test condition that checks if the `username` variable is empty. `-z` returns true if the string is of zero length. - **`&& continue`**: If the `username` variable is empty, the `continue` statement is executed. This causes the loop to skip the current iteration and proceed to the next line of input. This effectively ignores empty lines in the input file. 
**Alternatives:** Use more sophisticated parsing techniques or external libraries for handling input files. #### D. User and Group Management Creating users, assigning them to groups, and setting permissions are the core functionalities of the script. **Checking and Creating User** ```bash if id "$username" &>/dev/null; then echo "User $username already exists, skipping..." | tee -a "$LOG_FILE" else useradd -m -s /bin/bash -G "$username" "$username" echo "Created user $username with personal group $username" | tee -a "$LOG_FILE" fi ``` - **`if id "$username" &>/dev/null; then`**: - The `id` command checks if a user exists. It returns 0 if the user exists and non-zero if the user does not exist. - `&>/dev/null` redirects both stdout and stderr to `/dev/null`, effectively discarding any output from the `id` command. - If the user exists, the script proceeds to the `then` block; otherwise, it goes to the `else` block. - **`echo "User $username already exists, skipping..." | tee -a "$LOG_FILE"`**: - If the user exists, this message is logged. The `tee` command outputs the message to both the terminal and the log file (`$LOG_FILE`). - **`useradd -m -s /bin/bash -G "$username" "$username"`**: - If the user does not exist, the `useradd` command creates the user. - `-m`: Creates a home directory for the user. - `-s /bin/bash`: Sets the user's default shell to `/bin/bash`. - `-G "$username"`: Creates a personal group for the user with the same name as the username and adds the user to this group. - The username is specified twice because the first instance specifies the group and the second specifies the username. - **`echo "Created user $username with personal group $username" | tee -a "$LOG_FILE"`**: - Logs the creation of the new user and their personal group. **Adding User to Additional Groups** ```bash if [ -n "$groups" ]; then IFS=',' read -ra ADDR <<<"$groups" for group in "${ADDR[@]}"; do group=$(echo "$group" | xargs) # Remove whitespace if ! getent group "$group" >/dev/null; then groupadd "$group" echo "Created group $group" | tee -a "$LOG_FILE" fi usermod -aG "$group" "$username" echo "Added user $username to group $group" | tee -a "$LOG_FILE" done fi ``` - **`if [ -n "$groups" ]; then`**: - Checks if the `groups` variable is non-empty. `-n` returns true if the length of the string is non-zero. - **`IFS=',' read -ra ADDR <<<"$groups"`**: - Sets the Internal Field Separator (IFS) to `,` to split the `groups` string into an array (`ADDR`) using commas as delimiters. - `<<<"$groups"`: This is a here-string, which feeds the value of `groups` into the `read` command. - **`for group in "${ADDR[@]}"; do`**: - Iterates over each group name in the `ADDR` array. - **`group=$(echo "$group" | xargs)`**: - Removes any leading or trailing whitespace from the group name using `xargs`. - **`if ! getent group "$group" >/dev/null; then`**: - The `getent group "$group"` command checks if the group exists. - `!` negates the result, so the condition is true if the group does not exist. - `>/dev/null` discards the output of the `getent` command. - **`groupadd "$group"`**: - Creates the group if it does not exist. - **`echo "Created group $group" | tee -a "$LOG_FILE"`**: - Logs the creation of the new group. - **`usermod -aG "$group" "$username"`**: - Adds the user to the specified group using the `usermod` command. - `-aG`: Appends the user to the supplementary group(s) without removing them from other groups. 
- **`echo "Added user $username to group $group" | tee -a "$LOG_FILE"`**: - Logs the addition of the user to the group. **Alternatives:** One can use more advanced user management tools or scripts for handling user and group assignments. #### E. Setting Permissions and Ownership for the home directory **Setting Permissions for the Home Directory** ```bash chmod 700 "/home/$username" ``` - **`chmod 700 "/home/$username"`**: - `chmod` is a command used to change the file mode (permissions) of a file or directory. - `700` sets the permissions of the directory to: - `7` (read, write, and execute) for the owner (`username`). - `0` (no permissions) for the group. - `0` (no permissions) for others. - `"/home/$username"` specifies the path of the home directory for the user. The `$username` variable is replaced with the actual username. - This ensures that only the user can access their home directory, providing security and privacy. **Changing Ownership of the Home Directory** ```bash chown "$username":"$username" "/home/$username" ``` - **`chown "$username":"$username" "/home/$username"`**: - `chown` is a command used to change the ownership of a file or directory. - `"$username":"$username"` sets both the user owner and the group owner to the specified username. - `"/home/$username"` specifies the path of the home directory for the user. - This ensures that the user owns their home directory and can manage its contents. **Logging the Action** ```bash echo "Set permissions for /home/$username" | tee -a "$LOG_FILE" ``` - **`echo "Set permissions for /home/$username"`**: - Prints the message `"Set permissions for /home/$username"` to the terminal. - The `$username` variable is replaced with the actual username, providing a specific log entry for each user. - **`| tee -a "$LOG_FILE"`**: - The `tee` command reads from standard input and writes to both standard output and the specified file. - `-a` option tells `tee` to append to the file rather than overwrite it. - `"$LOG_FILE"` specifies the path of the log file where the message will be recorded. - This logs the action of setting permissions for the user's home directory, ensuring a record is kept of all actions performed by the script. #### F. Generating Random Passwords Let's break down the function that generates random passwords: ```bash # Function to generate random passwords generate_password() { local password_length=12 tr -dc A-Za-z0-9 </dev/urandom | head -c $password_length } ``` **Explanation** 1 **Function Definition:** ```bash generate_password() { ``` - This line defines a shell function named `generate_password`. Functions in bash allow you to encapsulate and reuse code. 2 **Local Variable:** ```bash local password_length=12 ``` - `local` is a keyword used to declare a variable with local scope within the function. This means that `password_length` is only accessible within the `generate_password` function. - `password_length=12` sets the length of the generated password to 12 characters. You can adjust this value to generate passwords of different lengths. 3 **Generating the Password:** ```bash tr -dc A-Za-z0-9 </dev/urandom | head -c $password_length ``` - This line generates the random password using a combination of `tr`, `head`, and `/dev/urandom`: - `/dev/urandom`: This is a special file that provides random data. It's commonly used for generating random numbers or strings. - `tr -dc A-Za-z0-9`: The `tr` command is used to translate or delete characters. 
In this case: - `-d` option deletes characters from the input that are not specified. - `-c` option complements the set of characters specified, effectively including only the characters in the specified set. - `A-Za-z0-9` specifies the set of characters to include: uppercase letters (`A-Z`), lowercase letters (`a-z`), and digits (`0-9`). - The output of `/dev/urandom` is piped (`|`) to `tr` to filter out any characters not in the specified set. - `head -c $password_length`: The `head` command outputs the first part of files. The `-c` option specifies the number of bytes to output. Here, it outputs the first 12 characters (`$password_length`) from the filtered random data. The result is a secure, random password of the specified length. #### G. Generating and Storing User Passwords ```bash # Generate and store the user's password password=$(generate_password) echo "$username,$password" >> "$PASSWORD_FILE" echo "Generated password for $username" | tee -a "$LOG_FILE" echo "$username:$password" | chpasswd ``` **Generating user password** ```bash password=$(generate_password) ``` - **`password=$(generate_password)`**: - This line calls the `generate_password` function (defined earlier) to generate a random password. - `$(...)` is command substitution, which captures the output of the `generate_password` function and assigns it to the `password` variable. **Storing the Username and Password** ```bash echo "$username,$password" >> "$PASSWORD_FILE" ``` - **`echo "$username,$password" >> "$PASSWORD_FILE"`**: - This line appends the username and the generated password to the password file. - `"$username,$password"` creates a comma-separated string containing the username and password. - `>>` is the append redirection operator, which appends the output to the specified file without overwriting its existing content. - `"$PASSWORD_FILE"` is the path to the file where the passwords are securely stored. **Logging the Password Generation** ```bash echo "Generated password for $username" | tee -a "$LOG_FILE" ``` - **`echo "Generated password for $username"`**: - Prints the message `"Generated password for $username"` to the terminal. - The `$username` variable is replaced with the actual username, providing a specific log entry for each user. - **`| tee -a "$LOG_FILE"`**: - The `tee` command reads from standard input and writes to both standard output and the specified file. - `-a` option tells `tee` to append to the file rather than overwrite it. - `"$LOG_FILE"` is the path of the log file where the message will be recorded. - This logs the action of generating a password for the user, ensuring a record is kept of all actions performed by the script. **Setting the User's Password** ```bash echo "$username:$password" | chpasswd ``` - **`echo "$username:$password"`**: - Creates a string in the format `username:password`, where `$username` and `$password` are replaced with the actual username and generated password. - **`| chpasswd`**: - The `chpasswd` command reads a list of username:password pairs from standard input and updates the passwords for the specified users. - This sets the password for the user to the generated password. **Alternatives:** Use password management tools or integrate with centralized authentication systems for more secure password handling. 
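If you want to convince yourself of what sections F and G do before running the whole script, the same flow can be exercised on its own. The username below is made up and this really sets a password, so only try it on a disposable VM or container.

```bash
# Mirror of sections F and G: create a throwaway user, generate a 12-character
# password with the same tr/head trick, and apply it with chpasswd
sudo useradd -m testuser
password=$(tr -dc A-Za-z0-9 </dev/urandom | head -c 12)
echo "testuser:$password" | sudo chpasswd
echo "testuser,$password"   # the record the script would append to the password file
```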
--- ### Conclusion By breaking down the script into detailed steps, we demonstrated the importance of checking for root privileges, setting up logging and secure storage, processing input files, managing users and groups, and securely handling passwords. Each part of the script was explained with code examples and alternatives, showcasing the thought process and best practices behind the implementation. As stated earlier, this project was undertaken as part of the **HNG internship**, a program designed to accelerate the learning and development of talented individuals in the tech industry. The HNG internship provides a platform for interns to work on real-world projects, enhancing their skills and preparing them for future careers. For the complete source code and detailed documentation, you can visit the [GitHub repository](https://github.com/Data-Bishop/Linux-User-Creation-Bash-Script/). To learn more about the HNG internship and its opportunities, please visit [https://hng.tech/internship](https://hng.tech/internship) or [https://hng.tech/premium](https://hng.tech/premium). Thank you for reading. ---
databishop
1,909,590
How to Create and Upload Your First Project to GitHub
Introduction Creating and uploading your first project to GitHub is a fundamental skill...
0
2024-07-03T20:57:11
https://dev.to/olsido/how-to-create-and-upload-your-first-project-to-github-k
github, git
# Introduction Creating and uploading your first project to GitHub is a fundamental skill for any developer. It provides a platform to collaborate with others, track changes, and showcase your work. This guide will walk you through the process of generating a Java project using Spring Initializr and pushing your project to GitHub. By the end of this article, you’ll have your project up and running on GitHub, ready for collaboration and version control. # Approaches to Creating and Uploading a Project to GitHub There are two common approaches to creating and uploading a project to GitHub. You can either create the project on GitHub, check it out, generate the code on top of it, and then push it to GitHub, or generate the code first and then upload it to GitHub. Both methods are effective, and the choice depends on your workflow preference. Below are the detailed steps for each approach. ## Option 1: Create Project in GitHub, Check It Out, Generate Code on Top, Then Push to GitHub 1. **Create a New Repository on GitHub:** * Go to GitHub and create a new repository. * Fill in the repository details, you may also select the options to include a README.md, .gitignore, and optionally a license. 2. **Clone the Repository Locally:** * You can do it either via IntelliJ, via File > New > Project, or using the "git clone" command from the command line. 3. **Generate Code Using Spring Initializr:** * In IntelliJ IDEA, go to File > New > Project. * Select Spring Initializr and configure your project (Group, Artifact, Dependencies, etc.). * Ensure you select the directory where you cloned the GitHub repository as the project location. 4. **Add and Commit Generated Code:** * Open the terminal in IntelliJ IDEA or your preferred terminal. * Navigate to your project directory. * Add the generated code to your local repository: ``` git add . git commit -m "Generated initial code using Spring Initializr" ``` 5. **Push Changes to GitHub:** * Push the changes to the GitHub repository: ``` git push origin <your code branch> ``` ## Option 2: Generate Code First and Then Upload to GitHub 1. **Generate Code Using Spring Initializr:** * Open IntelliJ IDEA. * Go to File > New > Project. * Select Spring Boot and configure your project (Group, Artifact, Dependencies, etc.). * Click Create, and IntelliJ IDEA will generate the project code for you. 2. **Initialize Local Git Repository:** * Open the terminal in IntelliJ IDEA or your preferred terminal. * Navigate to your project directory. * Run the following commands to initialize a local Git repository: ``` git init git add . git commit -m "Initial commit" ``` 3. **Create a New Repository on GitHub:** * Go to GitHub and create a new repository. The name of your new repository can be the same as your project directory name, but it doesn't have to be. * Fill in the repository details, and you can also select the options to include a README.md, .gitignore, and optionally a license. * Do not add any other files as they might conflict with your local repository. 4. **Push Local Repository to GitHub:** * In your terminal, link your local repository to the GitHub repository and push the code: ``` git remote add origin <your GitHub repo URL> git pull origin <GitHub branch> --allow-unrelated-histories git push -u origin <your code branch> ``` (The `--allow-unrelated-histories` option in Git is used when merging two branches that do not share a common base commit. 
By default, Git prevents merging histories that have no common ancestor to avoid accidentally combining unrelated projects. The `--allow-unrelated-histories` flag allows this merge to proceed.) # Conclusion By following the steps outlined in this guide, you have successfully created a Java project with Spring Initializr and pushed your project to GitHub. While this guide focused on a Java project, the principles and steps are applicable to a wide range of programming languages and frameworks. Now that your project is on GitHub, you can easily share it with others, manage contributions, and maintain a robust development workflow.
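One small addition that saves confusion with the placeholders above: before pushing, you can check what to substitute for `<your code branch>` and confirm the remote was added correctly.

```bash
git branch --show-current   # the local branch name to use in place of <your code branch>
git remote -v               # should list the GitHub URL you added as 'origin'
```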
olsido
1,910,834
ROR or LARAVEL Or MERN?
which framework is most demanding in current Marketplace?
0
2024-07-03T20:56:00
https://dev.to/scorfield/ror-or-laravel-or-mern-24d
javascript, rails, laravel, react
Which framework is most in demand in the current marketplace?
scorfield
1,910,833
Creating Linux Users using bash Script
Introduction In an Unix operating system, managing users and groups can be a laborious...
0
2024-07-03T20:54:18
https://dev.to/mayowa_sodipo_794f2d64923/creating-linux-users-using-bash-script-5hm6
devops, linux, bash
## **Introduction** In an Unix operating system, managing users and groups can be a laborious operation, particularly when handling several users. We can automate the creation of users and groups, configure home directories, generate random passwords, and log all activities with a Bash script, which will streamline the process. You may follow along with a detailed Bash script that completes these tasks by reading this blog article. **Prerequisites** Before we dive into the code, ensure you have a basic understanding of the Bash shell and the permission requirements for user creation on your Linux system. **The Bash Script** ``` #!/bin/bash LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" # Ensure /var/secure exists and has the correct permissions mkdir -p /var/secure chmod 700 /var/secure touch "$PASSWORD_FILE" chmod 600 "$PASSWORD_FILE" # Function to log messages log_message() { echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE" } # Function to generate random passwords generate_password() { openssl rand -base64 12 } # Function to add users, groups and set up home directories setup_user() { local username=$1 local groups=$2 # Create the user if ! id -u "$username" &>/dev/null; then password=$(generate_password) useradd -m -s /bin/bash "$username" echo "$username:$password" | chpasswd log_message "User $username created." # Store the username and password echo "$username,$password" >> "$PASSWORD_FILE" log_message "Password for $username stored." else log_message "User $username already exists." fi if ! getent group "$username" &>/dev/null; then groupadd "$username" log_message "Group $username created." fi usermod -aG "$group" "$username" log_message "Added $username to $group." # Create groups and add user to groups IFS=',' read -ra group_array <<< "$groups" for group in "${group_array[@]}"; do if ! getent group "$group" &>/dev/null; then groupadd "$group" log_message "Group $group created." fi usermod -aG "$group" "$username" log_message "Added $username to $group." done # Set up the home directory local home_dir="/home/$username" chown "$username":"$username" "$home_dir" chmod 700 "$home_dir" log_message "Home directory set up for $username with appropriate permissions." } if [ $# -eq 0 ]; then log_message "Usage: $0 <input_file>" exit 1 fi input_file=$1 log_message "Starting users and groups script." # Read the input file and process each line while IFS=';' read -r username groups; do setup_user "$username" "$groups" done < "$input_file" log_message "Users created with password and set to groups script completed." ``` **Understanding the Script** `#!/bin/bash` The line #!/bin/bash at the beginning of a script is called a shebang (or hashbang). It specifies the path to the interpreter that should be used to run the script. In this case, it indicates that the script should be executed using the Bash shell located at /bin/bash. ``` # Check if script is running with sudo if [ "$(id -u)" -ne 0 ]; then echo "This script must be run with sudo." exit 1 fi ``` `if [ "$(id -u)" -ne 0 ]; then:` Checks if the effective user ID ($(id -u)) is not equal (-ne) to 0, which is the user ID of the root user (typically indicating sudo privileges). ``` LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" mkdir -p /var/secure chmod 700 /var/secure touch "$PASSWORD_FILE" chmod 600 "$PASSWORD_FILE" ``` This script makes sure that a file and directory are set up securely to store user passwords. 
It first determines the directories for the password and log files, and if the /var/secure directory doesn't already exist, it creates it and sets its rights so that only the owner may access it. Subsequently, it generates the password file and modifies its permissions to restrict access to only the owner. This guarantees that private password data is kept safe. `log_message` function logs messages to the `$LOGFILE` path with date stamps `generate_password` function creates a 12 character long random password ``` setup_user() { local username=$1 local groups=$2 # Create the user if ! id -u "$username" &>/dev/null; then password=$(generate_password) useradd -m -s /bin/bash "$username" echo "$username:$password" | chpasswd log_message "User $username created." # Store the username and password echo "$username,$password" >> "$PASSWORD_FILE" log_message "Password for $username stored." else log_message "User $username already exists." fi if ! getent group "$username" &>/dev/null; then groupadd "$username" log_message "Group $username created." fi usermod -aG "$group" "$username" log_message "Added $username to $group." # Create groups and add user to groups IFS=',' read -ra group_array <<< "$groups" for group in "${group_array[@]}"; do if ! getent group "$group" &>/dev/null; then groupadd "$group" log_message "Group $group created." fi usermod -aG "$group" "$username" log_message "Added $username to $group." done # Set up the home directory local home_dir="/home/$username" chown "$username":"$username" "$home_dir" chmod 700 "$home_dir" log_message "Home directory set up for $username with appropriate permissions." } ``` This script defines a function setup_user that creates a new user with specified groups. It checks if the user already exists, and if not, generates a password, creates the user, and stores the username and password in a secure file. It then creates any specified groups that do not already exist and adds the user to those groups. Finally, it sets up the user's home directory with the correct ownership and permissions. ``` if [ $# -eq 0 ]; then log_message "Usage: $0 <input_file>" exit 1 fi input_file=$1 log_message "Starting users and groups script." ``` This piece of code determines whether any command-line arguments are supplied ($# determines the number of arguments). It reports an error message showing the right usage and quits with a status of 1, signalling an error, if none are given ($# -eq 0). It logs a message signalling the beginning of a script for managing users and groups if an input file argument is given. ``` while IFS=';' read -r username groups; do setup_user "$username" "$groups" done < "$input_file" ``` This script reads a file line by line, expecting each line to have a group and a username separated by a semicolon (;). It invokes the setup_user method for each line, passing the groups and username as parameters. Presumably, the setup_user function adds the user to the selected groups and creates them. Until every line in the input file has been processed, this loop keeps going. **Running the script** To run the script, execute it with superuser privileges (as user creation requires root access): `sudo bash create_users.sh users.txt` Upon execution this script will create multiple users, multiple groups and set up their home directory To learn more and push your programming journey forward you can visit: https://hng.tech/internship or https://hng.tech/hire
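If you want a quick smoke test after saving the script as `create_users.sh`, the input file below is just an example (note that this particular script does not trim whitespace, so keep the lines free of spaces around the separators):

```bash
cat > users.txt <<'EOF'
light;sudo,dev,www-data
idimma;sudo
mayowa;dev,www-data
EOF
sudo bash create_users.sh users.txt
id light                                      # should list light, sudo, dev and www-data
sudo tail -n 5 /var/log/user_management.log   # recent actions logged by the script
```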
mayowa_sodipo_794f2d64923
1,889,797
//build_space
i have to build something in 6 weeks.... i was thinking as linkdin have became a place of filth...
0
2024-06-15T19:59:07
https://dev.to/anushlinux/buildspace-3318
i have to build something in 6 weeks.... i was thinking as linkdin have became a place of filth where people brag about stuff which is quite literally un-braggable so why not make a community for people to rant out their problem with people on linkdin.... ik its not that good so i was thinking on making the user login system like sage ai... mixing ai with making the profile of a person is so fkin awesome making a community for devs in not a new idea and has been made by millions of people (maybe) so to it needs a usp (need to think on that) also making a site for people to rant about is also not new (hello twitter) and yes making a site for people who can rant about other people while being anonyms is also not a new thing and iirc the last company which made such a product like this was sued (the anonymous users were not really anonymous lol) so i need to think of something else (i will figure that out as i build this thing i do have 44 days lmao) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8mflog3sg20clyoxn9fl.png)
anushlinux
1,910,832
shadcn-ui/ui codebase analysis: How does shadcn-ui CLI work? β€” Part 2.5
I wanted to find out how shadcn-ui CLI works. In this article, I discuss the code used to build the...
0
2024-07-03T20:50:24
https://dev.to/ramunarasinga/shadcn-uiui-codebase-analysis-how-does-shadcn-ui-cli-work-part-25-2fdg
javascript, opensource, nextjs, shadcnui
I wanted to find out how shadcn-ui CLI works. In this article, I discuss the code used to build the shadcn-ui/ui CLI. In part 2.4, we looked at the function getProjectType that returns the type of Next.js project you are trying to install shadcn-ui into via the init command. Let’s move on to the next line of code. ![](https://media.licdn.com/dms/image/D4E12AQF-bqeNP5uugw/article-inline_image-shrink_1500_2232/0/1720039486490?e=1725494400&v=beta&t=al4ER2-Tvmn9POImZYBgTYxARCfx8NHNwPVHMH3e-6E) After finding out the projectType, the next step is to get the tailwindCssFile. ```js const tailwindCssFile = await getTailwindCssFile(cwd) ``` [getTailwindCssFile](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L135) is imported from [ui/packages/cli/src/utils/get-project-info.ts](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L135) and this function returns the main css file. Let’s find out how. ```js export async function getTailwindCssFile(cwd: string) { const files = await fg.glob("**/*.css", { cwd, deep: 3, ignore: PROJECT_SHARED_IGNORE, }) if (!files.length) { return null } for (const file of files) { const contents = await fs.readFile(path.resolve(cwd, file), "utf8") // Assume that if the file contains `@tailwind base` it's the main css file. if (contents.includes("@tailwind base")) { return file } } return null } ``` fg.glob ------- ```js const files = await fg.glob("**/*", { cwd, deep: 3, ignore: PROJECT_SHARED_IGNORE, }) ``` Check out the fast-glob docs about the [deep property](https://www.npmjs.com/package/fast-glob#deep). ![](https://media.licdn.com/dms/image/D4E12AQHT81CwfaDzcQ/article-inline_image-shrink_1000_1488/0/1720039486771?e=1725494400&v=beta&t=0ftOPK5Zqyb5wM656H9u0R-6cMBxBxm2EpY5b6gjpH0) You might be wondering what’s PROJECT_SHARED_IGNORE. Well, PROJECT_SHARED_IGNORE is an [array initialized at the top of the file](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L24). ![](https://media.licdn.com/dms/image/D4E12AQELbJQUCBa-iA/article-inline_image-shrink_1000_1488/0/1720039486286?e=1725494400&v=beta&t=NYPP-sHczsjz6swM9jadJz2CkcKrWLXVfuYVHUv_zXk) Check out the docs for the [ignore property.](https://www.npmjs.com/package/fast-glob#ignore) ![](https://media.licdn.com/dms/image/D4E12AQEiyM8TrUC3YQ/article-inline_image-shrink_1500_2232/0/1720039486400?e=1725494400&v=beta&t=YghsmuBrHkHMU6vYPDdeW3SpT1_0CukmXFgT4ncOIvE) How is the main css file returned? ------------------------------ ```js for (const file of files) { const contents = await fs.readFile(path.resolve(cwd, file), "utf8") // Assume that if the file contains `@tailwind base` it's the main css file. if (contents.includes("@tailwind base")) { return file } } ``` There is a check, contents.includes("@tailwind base"), which is based on an assumption described in a comment. Conclusion: ----------- Finding the main css file with tailwind base classes based on the assumption that the content of the file includes @tailwind base is clever. Again, getTailwindCssFile also uses fg.glob to get all the files in a given cwd to find the file that contains @tailwind base. I don’t know if I will ever use such an implementation, but for now I know that such a thing is possible with fast-glob. Comes in handy when you are building a CLI-like package. > _Want to learn how to build shadcn-ui/ui from scratch?
Check out_ [_build-from-scratch_](https://tthroo.com/) About me: --------- Website: [https://ramunarasinga.com/](https://ramunarasinga.com/) Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/) Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga) Email: [[email protected]](mailto:[email protected]) [Build shadcn-ui/ui from scratch](https://tthroo.com/) References: ----------- 1. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L80](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L80) 2. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L135](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L135)
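Side note (not from the shadcn-ui codebase): the same "find the stylesheet that contains `@tailwind base`" idea can be sketched as a shell one-liner, handy for checking a project by hand. The path and pattern here are illustrative:

```bash
# List .css files under the current directory and print the first one
# whose contents include "@tailwind base" (assumed to be the main css file).
grep -rl --include='*.css' '@tailwind base' . | head -n 1
```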
ramunarasinga
1,910,830
Automated User Management Script for Linux Systems
Nowadays managing user accounts properly is extremely important for SysOps professionals working in...
0
2024-07-03T20:46:32
https://dev.to/damilola_lawal_b415434987/automated-user-management-script-for-linux-systems-2olo
Nowadays, managing user accounts properly is extremely important for SysOps professionals working in dynamic IT setups. Automating user provisioning saves time, ensures consistency and security across systems, and improves efficiency, since much of the process no longer needs manual intervention. In this article, we will discuss Bash scripting and how to create a Bash script that creates and manages user accounts on Linux servers. We will be writing a script that reads a text file containing the employees' usernames and group names, where each line is formatted as "user;groups". This script will create users and groups as specified, set up home directories with appropriate permissions and ownership, generate random passwords for the users, and log all actions to _/var/log/user_management.log_. Additionally, it will store the generated passwords securely in _/var/secure/user_passwords.txt_. We will also ensure the script handles errors for scenarios like existing users. - Each user will have a personal group with the same name as the username; this group name will not be written in the text file - A user can have multiple groups, each group delimited by comma "," - Usernames and user groups are separated by semicolon ";" For example; 1. light; sudo,dev,www-data 2. idimma; sudo 3. mayowa; dev,www-data Where light is the username and the groups are sudo, dev, www-data Firstly, ``` #!/bin/bash # Check if script is run as root if [ "$EUID" -ne 0 ]; then echo "Please run as root" exit 1 fi ``` The #!/bin/bash line at the beginning of a script is known as the shebang. This shebang line tells the operating system which interpreter to use to execute the script. In this case, it specifies the Bash shell as the interpreter. The line if [ "$EUID" -ne 0 ]; checks if the script is being run as root. The EUID (Effective User ID) is used to determine the privileges of the executing process. The -ne operator means "not equal." Since the root user has an ID of 0, this condition checks if the current user is not root. If true, it prompts the user to run the script as root and exits. Next; ``` # Check if the input file is provided if [ -z "$1" ]; then echo "Usage: bash create_users.sh <name-of-text-file>" exit 1 fi ``` The script checks if a positional argument is provided. If not, it exits with a status of 1 and displays the message "Usage: bash create_users.sh <name-of-text-file>", prompting the user to provide the correct argument. _The -z test checks whether a string has a length of zero_, and _"$1" indicates the first parameter after the script name ($0)_, which is later assigned to the variable named INPUT_FILE. Next; ``` INPUT_FILE="$1" LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.txt" # Ensure log and password files exist and have the correct permissions touch $LOG_FILE chmod 644 $LOG_FILE mkdir -p /var/secure touch $PASSWORD_FILE chmod 600 $PASSWORD_FILE ``` We create the directory /var/secure and the file user_passwords.txt in it to securely store user passwords, with permissions set to "600" (read and write only for the owner). Additionally, we create the file user_management.log in the /var/log directory to log all events. 
The first positional parameter ($1) is assigned to the INPUT_FILE variable. Next; ``` # Function to generate random password generate_password() { openssl rand -base64 12 } ``` The OpenSSL rand command is used to generate a cryptographically random, base64-encoded password (strong and resistant to guessing, as it contains letters, numbers, and special characters). This function generates the password and will be called later in the script. Next; ``` # Process the input file while IFS=';' read -r username groups; do # Create personal group for the user if ! getent group "$username" &>/dev/null; then groupadd "$username" echo "Created group $username" | tee -a $LOG_FILE fi ``` Let's process the input file; > IFS means Internal Field Separator; this is an environment variable which defines a list of characters the Bash shell uses as field separators. These include space, tab, and newline. We use a while loop to read the username and groups, splitting each line on the separator defined by IFS. We will create a personal group for the user, but before that we check the system group database (/etc/group) using the _getent_ command to make sure the group is not already present. The negation operator (!) is used to reverse the outcome of an expression. In this case, if the group does not exist, the expression with ! evaluates to true. This condition triggers the groupadd command to create the group if it doesn't already exist. ``` # Check if user already exists if id "$username" &>/dev/null; then echo "User $username already exists. Skipping..." | tee -a $LOG_FILE continue fi ``` If the user already exists, the id command would print output we don't need, so we redirect it (standard output and error) to _/dev/null_, while tee writes our "already exists" message into the $LOG_FILE. The continue statement then skips this user and moves on to the next line, so a user is only created when it does not already exist. ``` # Create user and personal group useradd -m -s /bin/bash -g "$username" "$username" if [ $? -eq 0 ]; then echo "Created user $username with a personal group $username" | tee -a $LOG_FILE else echo "Failed to create user $username" | tee -a $LOG_FILE continue fi ``` Let's create the user and add them to the personal group using the popular useradd command, with -m (to add a home directory for the user), -s (to set the shell to /bin/bash), and -g (to set the primary group to the personal group). If the user is created and added to the personal group successfully, the exit status ($?) equals zero (0) and the success message "Created user $username with a personal group $username" is written to the $LOG_FILE; if not, the else statement is executed. Next; ``` # Generate password and set it for the user password=$(generate_password) echo "$username:$password" | chpasswd if [ $? -eq 0 ]; then echo "$username,$password" >> $PASSWORD_FILE echo "Set password for $username" | tee -a $LOG_FILE else echo "Failed to set password for $username" | tee -a $LOG_FILE fi ``` Let's call the function to generate the password, set it for the user using chpasswd (change password), write the username,password pair into the $PASSWORD_FILE, and log it. > chpasswd is designed to process multiple username and password pairs from standard input (usually via a pipe | or a redirection <). Next, ``` # Create additional groups if specified if [ -n "$groups" ]; then IFS=',' read -ra group_array <<< "$groups" for group in "${group_array[@]}"; do # Create group if it doesn't exist if ! 
getent group "$group" &>/dev/null; then groupadd "$group" echo "Created group $group" | tee -a $LOG_FILE fi ``` We create the other groups the user belongs to by splitting $groups into an array and iterating over it with "${group_array[@]}", where "@" expands to every element of the array. The -n test is similar to -z, except that it describes a value whose length is greater than zero. We check the system group database (/etc/group) using the getent command, again with the negation operator (!): if the $group does not exist, the expression evaluates to true and the group is created with the groupadd command, as previously discussed. Next; ``` usermod -aG "$group" "$username" if [ $? -eq 0 ]; then echo "Added $username to group $group" | tee -a $LOG_FILE else echo "Failed to add $username to group $group" | tee -a $LOG_FILE fi done fi ``` We then add the user to the group using the usermod -aG command. Finally, ``` done < "$INPUT_FILE" echo "User creation process completed." | tee -a $LOG_FILE ``` we close our while loop and pass it the $INPUT_FILE to read lines from. Create a directory, touch a file named create_users.sh (shell script) in the directory, and make the file executable; ``` sudo mkdir -p ~/bash-script sudo touch ~/bash-script/create_users.sh sudo chmod +x ~/bash-script/create_users.sh ``` Open create_users.sh and paste this script; ``` #!/bin/bash # Check if script is run as root if [ "$EUID" -ne 0 ]; then echo "Please run as root" exit 1 fi # Check if the input file is provided if [ -z "$1" ]; then echo "Usage: bash create_users.sh <name-of-text-file>" exit 1 fi INPUT_FILE="$1" LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.txt" # Ensure log and password files exist and have the correct permissions touch $LOG_FILE chmod 644 $LOG_FILE mkdir -p /var/secure touch $PASSWORD_FILE chmod 600 $PASSWORD_FILE # Function to generate random password generate_password() { openssl rand -base64 12 } # Process the input file while IFS=';' read -r username groups; do # Create personal group for the user if ! getent group "$username" &>/dev/null; then groupadd "$username" echo "Created group $username" | tee -a $LOG_FILE fi # Check if user already exists if id "$username" &>/dev/null; then echo "User $username already exists. Skipping..." | tee -a $LOG_FILE continue fi # Create user and personal group useradd -m -s /bin/bash -g "$username" "$username" if [ $? -eq 0 ]; then echo "Created user $username with a personal group $username" | tee -a $LOG_FILE else echo "Failed to create user $username" | tee -a $LOG_FILE continue fi # Generate password and set it for the user password=$(generate_password) echo "$username:$password" | chpasswd if [ $? -eq 0 ]; then echo "$username,$password" >> $PASSWORD_FILE echo "Set password for $username" | tee -a $LOG_FILE else echo "Failed to set password for $username" | tee -a $LOG_FILE fi # Create additional groups if specified if [ -n "$groups" ]; then IFS=',' read -ra group_array <<< "$groups" for group in "${group_array[@]}"; do # Create group if it doesn't exist if ! getent group "$group" &>/dev/null; then groupadd "$group" echo "Created group $group" | tee -a $LOG_FILE fi usermod -aG "$group" "$username" if [ $? -eq 0 ]; then echo "Added $username to group $group" | tee -a $LOG_FILE else echo "Failed to add $username to group $group" | tee -a $LOG_FILE fi done fi done < "$INPUT_FILE" echo "User creation process completed." 
| tee -a $LOG_FILE ``` Create a file (input_file.txt) containing username;groups ``` light; sudo,dev,www-data idimma; sudo mayowa; dev,www-data ``` Run the script and pass input_file.txt as an argument (see the quick run-and-verify sketch after this section). Thank you for reading. To learn more, kindly join the HNG internship programme to get your tech skills upgraded and land your dream job. Follow this link https://hng.tech/internship, https://hng.tech/hire
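Quick run-and-verify sketch (illustrative; the file names match the ones used above, and the checks assume the sample users were processed):

```bash
# Run the script as root, passing the input file
sudo bash create_users.sh input_file.txt

# Confirm the log and the root-only password store were written
sudo cat /var/log/user_management.log
sudo cat /var/secure/user_passwords.txt

# Spot-check one user and their group memberships
getent passwd light
groups light
```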
damilola_lawal_b415434987
1,910,828
New in Rails 7.2: Active Model Got type_for_attribute
What's New? Ruby on Rails 7.2 brings a handy change. The type_for_attribute method is now...
0
2024-07-03T20:42:56
https://jetthoughts.com/blog/new-in-rails-72-active-model-got-typeforattribute-changelog/
rails, changelog
## What's New? Ruby on Rails 7.2 brings a handy change. The `type_for_attribute` method is now in Active Model. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ilmccorzumrcqr8nxvre.png) ## What is type_for_attribute? It's a method that simplifies type checking, relieving you from the burden of manual checks for both Active Record and Active Model. ## How it Works Here is a simple example: ```ruby class MyModel include ActiveModel::Attributes attribute :my_attribute, :integer end MyModel.type_for_attribute(:my_attribute) # => #<ActiveModel::Type::Integer ...> ``` ## Real-Life Example Think of a signup form. You need to check the types of inputs. Here is how you can use it: ```ruby class SignupForm include ActiveModel::Attributes attribute :email, :string attribute :age, :integer def initialize(params) @params = params @params.each do |key, value| if self.class.type_for_attribute(key.to_sym) send("#{key}=", value) else raise "Unknown attribute type" end end end end form = SignupForm.new(email: "[email protected]", age: "twenty") # raises "Unknown attribute type" when a param key has no declared attribute type ``` ## Before the Change Before this update, you had to write custom code. You had to check types manually. ## After the Change Now, `type_for_attribute` makes it easy and saves you valuable time. Just include `ActiveModel::Attributes`. It makes your code cleaner and your development process more efficient. ## Why It Matters This change empowers you to catch errors, ensures data integrity, and is a small but powerful tool that puts you in control of your code. ## In Summary Ruby on Rails 7.2 simplifies type checking. Use `type_for_attribute` with Active Model. It's quick and efficient. This change is a great addition. Give it a try in your next project!
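If you want to poke at this from a terminal, here is a small illustrative check; it assumes a Rails 7.2 app that already defines the MyModel class shown above:

```bash
# Prints the resolved type symbol for the attribute, e.g. "integer"
bin/rails runner 'puts MyModel.type_for_attribute(:my_attribute).type'
```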
jetthoughts_61
1,910,827
Automating User and Group Management with Bash Scripting.
In modern IT environments, efficient user and group management is key to maintaining security and...
0
2024-07-03T20:41:46
https://dev.to/simeon4real/automating-user-and-group-management-with-bash-scripting-51jg
bash, linux
In modern IT environments, efficient user and group management is key to maintaining security and operational integrity. Manually creating and managing users across multiple systems is time-consuming and very prone to errors due to its repetitiveness. Automation becomes a handy tool for handling repetitive tasks like creating and managing users, groups and permissions. This article will walk you through a Bash script that automates the processes of user and group creation and group assignment based on a structured input file. The Bash script also contains a logging mechanism for reviewing all operations of the script. ## Objective The script is named **create_users.sh**. It reads a text file containing usernames and their respective groups, creates users, assigns them to specified groups, generates random passwords securely for each user and stores them. It then logs all actions. ## Prerequisites To successfully follow this tutorial, you need the following: - A Linux machine with root privileges: The script requires administrative rights to create users, groups, and manage file permissions (sudo or root access). - Basic knowledge of Bash commands and scripting: Familiarity with navigating the command line and editing scripts will help in understanding and customizing the provided script. - A text editor of choice (vi, nano, vim, etc): You'll use a text editor to modify the script and prepare input files. Choose one that you're comfortable with or have experience using. ## Overview The script automates the creation of users and groups by reading input from a text file formatted as username;groups. In a nutshell, the script does the following: - Reads a text file passed as an argument to the script at runtime. - Validates root privileges and the existence of a secure directory for password storage (/var/secure). - Creates usernames and assigns them to their respective groups specified in the input file. - Generates strong (16-character), random passwords for each user and securely stores them in /var/secure/user_passwords.txt. - Logs all actions to /var/log/user_management.log for audit purposes. ## Writing our script. **Create script file** Open your terminal and create an empty file named `create_users.sh`. `touch create_users.sh` Using your preferred editor, open the `create_users.sh` file we just created. For this tutorial, we will be using the vim text editor; feel free to use your editor of choice. `vim create_users.sh` **Add the shebang line** At the top of the script, add the [shebang line](https://en.wikipedia.org/wiki/Shebang_(Unix)). ``` #!/bin/bash ``` **Check if the script is run with root privileges.** In a Linux system, creating users and groups, setting permissions, and so on typically require administrative privileges. After the shebang line, add the following code. ```bash # Check if running as root if [[ $(id -u) -ne 0 ]]; then echo "This script must be run as root." exit 1 fi ``` The check above, `if [[ $(id -u) -ne 0 ]]; then`, retrieves the current user's ID and tests whether it is not equal to 0. The root user has an ID of 0, so if the ID is 0 the script continues to the next phase; otherwise it prints a message and terminates, since the user does not have root privileges. ## Ensure /var/secure directory exists for storing passwords securely The "/var/secure" directory is typically used as a secure directory where sensitive information, such as user passwords, can be stored with restricted access. ```bash if [[ ! 
-d "/var/secure" ]]; then mkdir -p /var/secure chown root:root /var/secure chmod 700 /var/secure fi ``` If the `"/var/secure"` directory does not exist, it is immediately created, owned by root, with permissions 700 (read, write, and execute for the owner only). ## Setting Up Our Variables Before proceeding, we need to set up some variables that will be reused in other parts of the script. ```bash # Variables INPUT_FILE="$1" LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.txt" ``` - The **INPUT_FILE** variable is set to $1, which is the text file passed as an argument to our script. - The **LOG_FILE** variable is set to `user_management.log` in `/var/log/` and is used for storing logs from our script. - The **PASSWORD_FILE** variable is set to the `user_passwords.txt` file in the `/var/secure/` directory, which was created earlier by our script if it did not already exist. ## Creating The Generate Password Utility Function The script contains a utility function that acts as a helper for generating a secure password, which we will use in the core part of the script. **generate_password** This helper function generates a strong random password using the `/dev/urandom` Linux kernel interface for random data. ```bash generate_password() { tr -dc 'A-Za-z0-9!@#$%^&*()_+=-[]{}|;:<>,.?/~' </dev/urandom | head -c 16 } ``` The above code generates a 16-character random password using the kernel's built-in random number generator. The generated password consists of uppercase letters (A-Z), lowercase letters (a-z), digits (0-9), and some special characters. ## User and Group Creation This is the core part of the script. It reads each line from the input file ($1), creates users, assigns them to the specified groups, and logs all actions to the log file. ```bash while IFS=";" read -r user groups; do # Remove whitespaces from user and groups user=$(echo "$user" | xargs) groups=$(echo "$groups" | xargs) # Validate username before creating user if [[ -z "$user" ]]; then echo "Invalid username. Skipping user creation." continue fi # Check if user already exists if id "$user" >/dev/null 2>&1; then echo "User $user already exists. Skipping user creation." continue fi # Create a personal group for the user groupadd "$user" 2>/dev/null usermod -aG "$user" "$user" # Add user to specified groups IFS=',' read -ra group_list <<< "$groups" for group in "${group_list[@]}"; do group=$(echo "$group" | xargs) if [ -n "$group" ]; then groupadd "$group" 2>/dev/null usermod -aG "$group" "$user" fi done # Generate and store password securely password=$(generate_password) echo "$user:$password" >> "$PASSWORD_FILE" # Log actions echo "$(date +'%Y-%m-%d %H:%M:%S') Created user '$user' and added to groups: $groups" >> "$LOG_FILE" done < "$INPUT_FILE" ``` Detailed breakdown of the above code block: - **Read in the file** Firstly, we begin a while loop that reads in the file, with our IFS (Internal Field Separator) set to the semicolon ";": `while IFS=";" read -r user groups; do`. The content of the text file is read in and split on ";" so we can tell which part of each line is the user name and which part is the group names. - **Trim whitespace** Next, we trim all whitespace around the user names and group names using the xargs command-line utility. ```bash user=$(echo "$user" | xargs) groups=$(echo "$groups" | xargs) ``` - **Validate users** We validate user names using two criteria: 1. The user name is not empty: `if [[ -z "$user" ]];` 2. 
The user name does not already exist: `if id "$user" >/dev/null 2>&1; then` If the user name is empty or already exists, it is skipped and the script continues. - **Create the user's personal group** We then create a group specifically for the user and add the user to it. ```bash groupadd "$user" 2>/dev/null usermod -aG "$user" "$user" ``` - **Add user to specified groups** Here, we split the groups string read from the same line of the input file, but this time with IFS set to ','. This way, we can get each group name even if the user belongs to more than one specified group. ```bash IFS=',' read -ra group_list <<< "$groups" for group in "${group_list[@]}"; do group=$(echo "$group" | xargs) if [ -n "$group" ]; then groupadd "$group" 2>/dev/null usermod -aG "$group" "$user" fi done ``` - **Generate and store passwords for each user** Recall that we earlier created a `generate_password` helper function for our script; we can now use it to generate and store each user's password in the **$PASSWORD_FILE**. ```bash password=$(generate_password) echo "$user:$password" >> "$PASSWORD_FILE" ``` - **Log actions** After all operations, we add a new log entry to our log file for each user operation. ```bash echo "$(date +'%Y-%m-%d %H:%M:%S') Created user '$user' and added to groups: $groups" >> "$LOG_FILE" ``` - **End the loop** `done < "$INPUT_FILE"` ## Run Our Script Set execute permission for the `create_users.sh` file. ```bash chmod +x create_users.sh ``` Here is a sample user input file you can use for testing our bash script: `vim user_groups.txt` and paste the below text: ```bash nelson;cloudeng,sre victor;backend john;qa jane;dev,manager robert;marketing emily;design,research michael;devops olivia;design,research william;support sophia;content,marketing daniel;devops,sre ava;dev,qa ``` Now run the script using: ```bash ./create_users.sh user_groups.txt ``` Checking our script logs: ```bash cat /var/log/user_management.log ``` ## Conclusion Thanks for following me on this tutorial. This technical article was created as part of my Stage 1 task for HNG Internship. HNG is an internship that helps people hone and improve their tech skills. To learn more, visit the HNG Internship website: https://hng.tech/internship
simeon4real
1,909,516
Creating Blob Storage in Microsoft Azure: A Step-by-Step Guide
Blob storage is a fundamental service offered by Microsoft Azure for storing massive amounts of...
0
2024-07-03T20:41:12
https://dev.to/jimiog/creating-blob-storage-in-microsoft-azure-a-step-by-step-guide-4hg6
azure, microsoft, cloud, storage
Blob storage is a fundamental service offered by Microsoft Azure for storing massive amounts of unstructured data. This data can be anything from text files and images to videos and backups. In this blog post, we'll walk you through creating Blob storage in Azure, configuring access levels, and even adding some redundancy measures. ### Setting Up the Foundation: The Storage Account 1. **Finding Your Storage Account:** Log in to the Azure portal and navigate to the "Storage Accounts" section. You can either search for it directly or find it through the dropdown menu. ![Locating Storage Account](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x3vjer70685fema1z1t0.jpg) 2. **Clicking Create:** This button initiates the storage account creation process. ![Creating Storage Account](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tt8n1wk0rjwqekv8tjou.jpg) 3. **Configuring the Storage Account:** Here's where you'll define some key details: * **Resource Group:** Create a new resource group or choose an existing one to organize your Azure resources. ![Configuring Storage Account](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vscysfbr26r60qr67vp2.jpg) * **Storage Account Name:** Pick a unique and descriptive name for your storage account. Remember, this name needs to adhere to DNS naming conventions. * **Region:** Select a region where your data will be stored. Ideally, choose a region closest to your target audience for optimal performance. **Pro Tip:** Leave all the other settings on their default values for now. You can always customize them later based on your specific needs. 4. **Reviewing and Creating:** Once you've filled out the details, click "Review + create" to double-check your configuration. If everything looks good, proceed with the creation process. 5. **Accessing Your Resource:** After successful creation, click "Go to resource" to access your newly minted storage account. ### Enabling High Availability with RA-GRS 1. **Reaching the Data Management Section:** Within your storage account settings, locate the "Redundancy" section. This is typically found under "Data management." ![Locating Redundancy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/baavi8f8zvd1lvqdkg5g.jpg) 2. **Enabling Read-Access Geo-Redundant Storage (RA-GRS):** Select "RA-GRS" from the available redundancy options. This ensures your data is replicated across geographically separated locations, offering superior disaster recovery capabilities. ![Changing Redundancy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/88dn2zdoc0y6tz5c1kmi.jpg) ### Granting Anonymous Access (with Caution) **Important Note:** Enabling anonymous access makes your blob storage publicly accessible. This is useful for scenarios where you want to share files directly through a web browser, but proceed with caution and only if necessary for your use case. 1. **Accessing Configuration Settings:** Navigate to the "Settings" dropdown menu and select "Configuration." 2. **Enabling Blob Anonymous Read Access:** Under "Public access settings," locate "Blob anonymous read access" and set it to "Enabled." Remember to keep other access settings to their defaults for now. 3. **Saving Changes:** Click "Save" to apply the configuration. ![Configuring the Storage Account](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z9nrasujt1w2nrb0lvx5.jpg) ### Creating and Uploading to Blob Storage 1. 
**Reaching the Blob Storage Section:** In the left-hand menu of your storage account, navigate to the "Data storage" section and select "Containers." ![Locating the Storage Container](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sr85p2iscrcgyjs6n8t4.jpg) 2. **Creating a Container:** Click the "+ Container" button and provide a unique name for your container. Think of a container as a virtual folder within your blob storage. ![Naming the container](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ldkmp8pn9bes98a7ic9s.jpg) 3. **Change Access Level:** Navigate to the container and click on the "Change access level" button. Change the "Anonymous access level" to Blob (anonymous read access for blobs only). ![Changing the access level](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f3pbefylvqq5q8ypfzcq.jpg) 4. **Uploading a File:** Navigate back to the container and click the "Upload" button. Select the file you wish to upload from your local machine and click "Upload" again. ![Upload button](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/47a5rcrah2m4af3ay19f.jpg) ![Uploading a file](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/adtuq8k6i114dwwsitla.jpg) 5. **Testing Accessibility:** After the upload is complete, locate the uploaded file and click on it. This will generate a unique URL. Copy this URL and paste it into your web browser. If anonymous access is configured correctly, you should be able to access the file directly through the browser. ![Copying the public upload link](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bbytq0wm7zw00c7xh9y2.jpg) ![Accessing the public file](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y8amrlb24bn28uzmi2pu.jpg) ### Optional Configurations: Soft Delete and Versioning While we've established the core functionalities of blob storage, here are a couple of optional configurations to consider: 1. **Enabling Soft Delete:** This feature allows you to recover accidentally deleted files for a certain period. Navigate back to the container overview and locate the "Blob service" section. Select "Soft delete" and define the retention period for deleted blobs. ![Locating Soft delete](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wevyjej6m1n0wp3nxkk9.jpg) ![Changing the soft delete duration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6b7jdcllls3ym3j2fibk.jpg) 2. **Enabling Blob Versioning:** This keeps track of different versions of the same file uploaded over time. This can be helpful if you need to revert to a previous version of the file. Navigate back to the container overview and locate the "Blob service" section. Select "Versioning" and enable it. ![Enabling versioning](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/06nn7slf99lx3ysviw3b.jpg) ### Cleaning Up Once you're finished experimenting, it's crucial to clean up your resources to avoid unnecessary charges. Remember to delete the storage account and any associated containers you created. This blog post has equipped you with the knowledge to create blob storage in Microsoft Azure, configure access levels, and implement basic data protection measures. By following these steps, you can leverage the power of Azure Blob storage for your cloud storage needs.
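If you prefer scripting to clicking through the portal, the same setup can be sketched with the Azure CLI. This is only an illustrative sketch: the resource group, account, and container names are placeholders, and the flags mirror the choices made above (RA-GRS redundancy and anonymous read access for blobs).

```bash
# Assumes the Azure CLI is installed and you are logged in (az login)
az group create --name demo-rg --location eastus

# Storage account with read-access geo-redundant storage (RA-GRS)
az storage account create \
  --name demostorage12345 \
  --resource-group demo-rg \
  --location eastus \
  --sku Standard_RAGRS \
  --allow-blob-public-access true

# Container with anonymous read access for blobs only
az storage container create \
  --name demo-container \
  --account-name demostorage12345 \
  --public-access blob

# Upload a local file to the container
az storage blob upload \
  --account-name demostorage12345 \
  --container-name demo-container \
  --name hello.txt \
  --file ./hello.txt
```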
jimiog
1,902,995
Python is better than Javascript!
Now that I've caught your attention, let's discuss the real topic of this article. My name is Maya...
0
2024-07-03T20:40:59
https://dev.to/mayayr115/python-is-better-than-javascript-g2i
beginners, python, javascript, programming
Now that I've caught your attention, let's discuss the real topic of this article. My name is Maya Ramkishun, and I believe this statement is true for many reasons. As someone who started with programming in Python 8 years ago and is relearning it now, it's still as simple as I remembered. Here's what you need to know about Python and why you need to know it: ## Why Python? Python is the most versatile and robust coding language in Software Engineering. Python provides developers with various libraries and uses easy-to-read syntax to promote readability and organization, making it suitable for both beginners and professional programmers to use. Due to its flexibility and high demand in the tech industry, Python is used in a multitude of areas such as: * **Web Development** * **Data Science/Analytics** * **Machine Learning and AI** * **Automation and Scripting** * **Software Development** * **Scientific Computing and Research** * **Cybersecurity** * **Education** * **Internet of Things (IoT)** * **Finance and FinTech** * **and much more!** Because of its wide range of usage, Python was ranked the #1 programming language in various programming language popularity indices such as the [TIOBE Index](https://www.tiobe.com/tiobe-index/), [PYPL](https://pypl.github.io/PYPL.html), and the [Stack Overflow Developer Survey](https://survey.stackoverflow.co/). This shows how large and active the Python community is, contributing to its rich ecosystem of libraries, frameworks, tools, and versatility in the Software Engineering world. ## How to Professionally Python! Python, like all programming languages, has its upsides and downsides. Let's take a look at why programmers might and might not consider adding Python to their skill set: ![Maya's Python Syntax Sampler](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sw2idre3ph7nqj13qnak.png) In my code snippet, I provided some lines of code you'd commonly see in Python. For those of you who know JavaScript, you can already see drastic differences that demonstrate why Python is better for beginners than JavaScript. For those of you who aren't familiar with JavaScript syntax, let's talk about them: ### Outputting to the Console In JavaScript, we use `console.log` to display responses in the console, like this: ![JavaScript console.log](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wezftpg57kh12lbjx77r.png) In Python, we use `print`, like this: ![Python print](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cyool9b9n7p10xfz8wsw.png) Both statements have the same functionality, just different syntax. Not much of a major difference here, but what about the others? ### Variables Here's how we would initialize and use JavaScript variables: ![JavaScript Variables](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/50dfx63bn0q6ayq8bwjc.png) However, in Python: ![Python Variables](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lhk9zpxdyevro47oyxso.png) Ah ha! Here, we notice that JavaScript variables are required to be initialized with a keyword, and those keywords determine whether the variable can be reassigned, while Python does not require any declaration keyword at all. ### Data Types JavaScript and Python have very similar data types but with different names. 
Here are some JavaScript data types: ![JavaScript Data Types](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bprr8hlav9n9bhmwgj7q.png) Along with some Python ones: ![Python Data Types](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/knpetb0qmeogmpr1sedb.png) See what I'm talking about now? ### Functions As for functions, they look drastically different in both languages. You can define functions in two ways in JavaScript: ![JavaScript Functions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vh2brjl2o171i3iakqso.png) And with parameters: ![JavaScript Function Parameters](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/snwl2ranb9umzsjf3r98.png) In Python, there's only one way to define a function: ![Python Function](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gpm5wctysmha2f8d6dsx.png) And with arguments: ![Python Function Arguments](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w3xdsoeg86wbxqn7btej.png) ## Conclusion In conclusion, there are many things you can accomplish with both languages, however, we can see which of the two is largely preferred among the majority of software engineers. It seems like I was right after all, but every engineer has their preferences. The programming world is our oyster, and we can choose any language or tool to create and discover new possibilities.
mayayr115
1,895,331
Creating a Storage Account in Microsoft Azure
A fundamental component of many Azure solutions is storage. Azure Storage Accounts provide highly...
0
2024-07-03T20:40:38
https://dev.to/jimiog/creating-a-storage-account-in-microsoft-azure-3k04
azure, devops, cloudstorage, microsoft
A fundamental component of many Azure solutions is storage. Azure Storage Accounts provide highly durable and scalable storage for a variety of data objects, including blobs, files, tables, and queues. In this blog post, we'll walk through deploying a basic Storage Account in Azure, ensuring it meets some baseline security best practices. **Why Create a Storage Account?** Storage Accounts are the backbone for many Azure services. You can leverage them for: * **Static website hosting:** Store your website's HTML, CSS, and JavaScript files in blob storage. * **Cloud applications:** Store application data, user uploads, and other assets. * **Backup and archive:** Securely store backups of your on-premises data. * **Big data analytics:** Store large datasets for processing with Azure Databricks or other analytics tools. **Deployment with Security in Mind** While this guide focuses on a basic deployment, it incorporates a few key security considerations: * **Secure transfer:** We'll enable encryption for data transferred to and from the Storage Account. * **Access restrictions:** While public access allows for easy demonstration purposes, we'll discuss disabling it for production environments. ### Steps to Deploy a Storage Account: Now, let's get started with deployment! 1. **Locate the Storage Account:** * In the Azure portal, navigate to the **Storage Accounts** section in **Azure Services**. Alternatively, search for "Storage Account" in the search bar. ![Locating Storage Account](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4rwfg1rtc4mrwafeuy6l.jpg) 2. **Create Storage Account:** * Click **Create Storage Account**. ![Creating Storage Accountt](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aglggnucczm1fnit6gpk.jpg) 3. **Configure Settings:** - **Resource Group:** Choose an existing resource group or create a new one to organize your Azure resources. ![Create Resource Group](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iww6j8w4kg8r8o81445z.jpg) - **Storage Account name:** Provide a unique globally identifiable name (3-24 characters, lowercase letters and numbers). - **Location:** Select a region geographically close to your primary audience for optimal performance. - **Account tier:** Leave the defaults for this demonstration. More advanced configurations can be explored based on your specific needs. - **Review+Create:** Once satisfied with your selections, click "Review+Create" to validate and deploy the Storage Account. ![Configuring Storage Account](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ppmo2ac13l56uqzjto85.jpg) 4. **Access Redundancy (Optional):** * By default, Azure Storage offers redundancy options to protect your data from hardware failures. For this demonstration, we'll leave redundancy settings at the default. However, in production environments, consider utilizing a more robust redundancy option like Geo-redundant Storage (GRS) for optimal data protection. ![Accessing Redundancy Settings](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/be0hzklrvylulomrv732.jpg) ![Configuring Redundancy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/noa914zqak2dwxrnmjns.jpg) 5. **Enable Secure Transfer:** * In the left menu, navigate to **Settings** > **Configuration**. ![Locating Configuration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bpxsmkda8zo0ir3z34je.jpg) * Locate **Secure transfer required** and ensure it's set to **Enabled**. This enforces encryption for all data transfers. 
* Choose a TLS version (e.g., 1.2) for the encryption protocol. * **Save** your changes. ![Changing the Configurations](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ta3t8yp3scb7pknzxb4.jpg) **Important Note:** Public access to the Storage Account is enabled for demonstration purposes only. In production environments, disable public access and implement more granular access controls using Azure Active Directory (AAD) for authorized users and applications. ![Locating Networking](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vovx01f5dl4w8ikkf1sd.jpg) ![Configuring the Network](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/77jz2vxfxg03s6sya56t.jpg) **Cleaning Up:** Once you've finished exploring your Storage Account, remember to delete it to avoid unnecessary charges. This guide provides a foundational understanding of deploying a Storage Account in Azure with a focus on secure data transfer. Remember to adapt security configurations based on your specific use case!
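For readers who prefer the command line, here is a hedged Azure CLI equivalent of the portal steps above. It is only an illustrative sketch: the names are placeholders, and the flags enforce HTTPS-only transfer, TLS 1.2 as the minimum version, and disabled anonymous blob access, matching the security notes in this post.

```bash
# Assumes the Azure CLI is installed and you are logged in (az login)
az group create --name storage-demo-rg --location eastus

# Storage account with secure transfer required and a minimum TLS version of 1.2
az storage account create \
  --name securestore12345 \
  --resource-group storage-demo-rg \
  --location eastus \
  --sku Standard_LRS \
  --https-only true \
  --min-tls-version TLS1_2 \
  --allow-blob-public-access false

# Clean up when finished to avoid unnecessary charges
az group delete --name storage-demo-rg --yes --no-wait
```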
jimiog
1,910,820
From Chaos to Clarity: Exploring Software Modeling
Over the years, I have noticed that many of the problems we face in different projects and companies...
0
2024-07-03T20:40:31
https://dev.to/douglasmedeiros/do-caos-a-clareza-explorando-a-modelagem-de-software-31d
productivity, braziliandevs, softwaredevelopment, ptbr
Over the years, I have noticed that many of the problems we face in different projects and companies have a common origin: a lack of clarity in software modeling. Confusing software, hard to understand and without adequate documentation, is a constant. This scenario gets even worse when new developers join the projects, making the work environment stressful and frustrating. In my experience, dealing with disorganized, labyrinthine code is a reality that makes our days more complicated and less productive. This chaos in teams and projects can, however, be mitigated with well-documented modeling and a more effective internal organization. By adopting these practices, we can turn a hostile environment into a clearer and more productive workplace. > This article was born from my talk "Do Caos à Clareza: Explorando a Modelagem de Software" ("From Chaos to Clarity: Exploring Software Modeling"). Recently, I had the opportunity to present it for the third time, this time at the PHP Velho Oeste event in May 2024. Speaking and actively participating in the community was an extremely gratifying experience, and it gave me a little push to create a new talk (~~will-put-it-here-when-I-decide~~) and keep contributing with topics that I believe can make a difference. So, as a way of closing this cycle, I decided to write this article dedicated to the core content of the talk. In it, I will explore the ideas and practices I shared about the importance of software modeling, and how this practice can transform our software projects, bringing clarity and efficiency to our teams. > ## Software Modeling I like to start from common ground, so if we look up software modeling, we will commonly find the following sentence in the literature: > Software modeling is building models that explain the characteristics or the behavior of a piece of software, offering an abstract view. > Reading this, the definition may sound technical and distant from our day-to-day work. However, let's simplify. Basically, modeling software is drawing our system. It means creating visual representations that explain how the software should work and how its different parts connect and communicate. These models become a "map" that guides the construction of the software, helping developers and stakeholders understand the project better. These models or drawings do not necessarily have to be technical; even simplified, they can serve as a reference for discussions between teams. > UML Class Box https://wiki.eclipse.org/File:Eclipselink_uml_class_diagram.gif > This does not mean that UML or other kinds of modeling are not important. They, like all standards and references, have their use cases and purposes. However, the idea here is to show that we can start with something simpler that helps us get organized and improve our development flow, connecting dots and boxes into reference diagrams of our flows. ![https://www.heflo.com/pt-br/modelagem-processos/exemplos-de-diagramas-bpmn/](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ds4z5g2opor4lwuk6zb4.png) ## But why draw? When we ask "why draw?", we often hear answers like "it's a waste of time" or "it won't help much". These answers are similar to what we hear about unit tests. Throughout my years of development, I have taken part in several projects that were true chaos.
There were onboardings that literally took more than three months. Sometimes because the VPN password took forever to arrive, other times because the environment was so confusing and poorly documented that getting the project running felt like a puzzle you solved while talking to half the company. Unfortunately, scenarios like this are common, and that chaos is reflected in the day-to-day. Many of these projects demanded a high cognitive load to understand. We cannot absorb 100% of a codebase without clear guidance, just by reading source code. It was methods on top of methods, forming a true spider web. That is very frustrating, and we end the day with that feeling of stress. To solve a problem, we often need to talk to three or four people. The information is all spread across specific heads in the company. Everyone knows that so-and-so is the one who understands this, or that someone else is the one who knows that. These problems always sound to me like **_Gossip-Oriented Software_**. ![Do Caos à Clareza | Slide 14](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4tw9aoqb9wf6ve2w7b0o.png) This kind of system is a classic, unfortunately. It can usually be identified by a few signs. There is that senior with 10 years at the company, considered senior not necessarily because of his skills, but because he has been at the company for 10 years. He becomes the developers' oracle, the reference for any question or problem. This creates a dangerous dependency, because if he leaves one day, his absence will be felt by the teams for a long time. Another sign of this kind of software is word-of-mouth communication as a form of documentation. We are not robots; if I ask the same person the same thing several times, they will answer differently each time. The problem appears when a company has no internal organization, because each team ends up with a different idea of how the system works. With every new member, things change even more. Without a central place to identify or study how things work, ghost bugs start to appear. Many business rules are simply forgotten, and we only remember they exist when they cause problems in production. Software modeling offers a practical solution to these problems. It creates a common language and a shared vision that make communication and collaboration easier among all team members. Drawing the software is not a waste of time; it is an investment that saves time and reduces frustration in the long run. With clear modeling, we can turn confusion into clarity, document effectively, and create an environment where everyone understands the system in the same way. This not only improves productivity, but also contributes to a healthier and more collaborative work environment, where knowledge is shared and everyone can contribute more effectively. ## Where to Start I like to say: just start, simple as that. I believe that modeling your system is similar to learning a programming language: you start with the basics, write your "Hello World" however you can and, as you run into difficulties, curiosities, or needs, you dig deeper and study further.
The modeling process has three essential stages: start, processing, and end. Although I don't want to focus on standards here, it is important to have a common language to make our understanding easier. For these three stages, we can represent the start as a small white circle, a square for an action, and a black circle for the end. ![Do Caos à Clareza | Slide 18](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ex3y6dcujleupctt9a6w.png) Whatever it is, a worker that will read a message and process it, or a cron job that will run a routine and fire a few events, each and every flow will always have a start, a processing stage, and an end. It may begin with a request or a data input and, after executing the necessary flow, it will reach a conclusion or completion. Keep in mind that "***the flow precedes the system***". A company or client will never think up a system out of nowhere. When a demand or a software idea appears, it is there to embrace, automate, or implement some existing flow. That is why it is essential to understand that each and every flow can be drawn. ### The User Notification Problem Let's explore a practical example to better understand the importance of software modeling: the user notification problem. Imagine that your team needs to implement a system to send email notifications to users. Sounds simple, right? But without a clear drawing, this task can become a real nightmare. First, let's visualize the basic user notification flow. ![Do Caos à Clareza | Slide 19](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3elrj1lb3iq2fm5ko338.png) First, we read the messages from a queue called "user_notify". Then we check whether there are messages in that queue. If there are no messages, the flow ends. If there are messages, we iterate over each one and send an email to the corresponding user. A drawing of this flow makes everything clearer. The idea and the implementation are no longer scattered across the developers' minds; they are materialized in the drawing. This allows everyone to visualize it, discuss specific points, identify improvements, and anticipate possible problems much more effectively. ### The Drawing Is Alive, Just Like the System Imagine that, after the flow is implemented, errors start to appear in the system. Without a clear drawing, identifying the origin of the problem can be complicated. However, with the diagram in hand, we can quickly check each stage of the flow and locate where the error is happening. For example, if the exception is being thrown because the message body is empty, we can see that a validation step was missing before sending the email. ![Do Caos à Clareza | Slide 20](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u1p1e2zh4rxwqctop0gq.png) These problems can be identified and resolved during understanding and drawing sessions. Often, after hours of discussion, someone looks at the flow and notices a gap, something that can prevent a bug in production. In this context of errors, we can go back to the drawing and update it together with the software. Keeping the drawing up to date is useful for future reference and so that other team members can tell whether the old flow has changed or not.
Remember that your software is alive; it will hardly ever stop receiving improvements or new features. Unless it is very specific or is not being used, the software is in constant evolution. In the same way, your drawing must be kept up to date. This is fundamental. It not only documents the current system, but also serves as a guide for future adaptations. With clear modeling, any new developer on the team can quickly understand how the system works and contribute effectively. Even if the flow gains new delivery channels, such as Telegram, or new error handling, we can go back to the drawing and discuss the best way to implement those changes. ![Do Caos à Clareza | Slide 22](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/90kjevjmlhpgvb9yojtv.png) ### Drawing Is Not Just for Software When I started writing the talk I presented, I had so many things I wanted to say that it became hard to create a coherent script. That is when I used [miro.com](http://miro.com/) to create a mind map of the topics. I kept organizing the ideas until I found an order that made sense and conveyed what I wanted to say. ![Mind map made with miro.com](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7bpw4z2ndubompel7jw2.png) But this technique goes beyond software. My wife, who is a psychologist, uses emotion diagrams with her patients. While talking with them, she would draw the relationships between people and the feelings that came up. These diagrams helped visualize the intensity and the impact of each emotion. I was surprised, and we started talking about how this technique helped her in her work. If you pay attention, you will see that every profession has some form of visual representation, flow, or drawing. And in our field of software development, it is no different. Using drawings and diagrams is a powerful way to organize ideas, understand processes, and communicate information clearly and efficiently. ## The Consequences of a Drawing Having drawings and diagrams in software projects brings countless benefits. They make communication between team members clearer, making the system easier to understand, especially for new developers. Visualizing the workflow helps identify possible flaws before coding, reducing bugs and problems in production. Keeping diagrams updated along with the code creates living documentation that reflects the current state of the system, making maintenance and evolution easier. Visual documentation allows new developers to quickly understand how the system works and locate the main features. In addition, diagrams standardize practices across the team and make continuous improvement easier, allowing regular reviews of the flows to identify and adjust critical points. ### Simpler, even when it does not look like it Even if creating and maintaining diagrams seems laborious at first, in the long run this practice proves extremely beneficial. The alternative would be much more complex, involving extensive reading of the codebase to understand the system. With diagrams, comprehension becomes much faster and more efficient. ## Standards and Tools There are several tools and standards that can help us draw and version our diagrams as a team. Some of the most popular tools include: - **Miro**: Excellent for creating mind maps and collaborative diagrams.
- **Lucidchart**: Great for flowcharts and UML. - **Draw.io**: A free and powerful tool for creating diagrams. - **PlantUML**: Lets you create UML diagrams through code, making integration with version control systems easier. ### Going deeper Just as with any programming language, as you go deeper into software modeling you will notice the need for more complex standards. There are many diagram standards that can be extremely useful in different contexts, such as state diagrams, sequence diagrams, and many others. - **Class Diagram**: Shows the structure of the system in terms of classes and their relationships. - **Sequence Diagram**: Details how objects interact in a specific flow. - **State Diagram**: Represents the states an object goes through during its life cycle. - **Activity Diagram**: Illustrates the workflow or the activities within a system. ## Conclusion Modeling software is not just a useful practice; it is essential for creating clear, efficient, and easily maintainable systems. By adopting software modeling, we transform chaotic environments into organized and productive workplaces. Drawing is not a waste of time, but a valuable investment that improves communication, reduces errors, improves documentation, and makes onboarding easier. With the right tools and standards, we can create diagrams that evolve with our software, ensuring that everyone on the team is always on the same page. Ultimately, software modeling is a practice that transforms the way we develop, understand, and maintain our projects, bringing clarity and efficiency to our daily work. [douglasmedeiros.dev](https://douglasmedeiros.dev) [@aqueleMetal](https://twitter.com/aqueleMetal) [Talk Slides](https://docs.google.com/presentation/d/1EaRerRVi9y6xAyJebmKKZLj8Fi4EKPgsdLcIaMdyY3I/edit?usp=sharing)
douglasmedeiros
1,910,503
Automating the Management of Users and Groups Using a Bash Script in Linux.
As a SysOps engineer in a company, there is the occasional necessity to manage the creation of users...
0
2024-07-03T20:33:59
https://dev.to/lanky23/automating-the-management-of-users-and-groups-using-a-bash-script-in-linux-12bh
As a SysOps engineer in a company, there is the occasional necessity to manage the creation of users and groups, maximum efficiency can be attained by automating this process. To address this, I have developed a Bash script to create users and groups, set up home directories with appropriate permissions and ownership, generate random passwords, and log all actions. This article details the functionality and structure of the script, ensuring error handling for scenarios such as existing users, and providing comprehensive documentation and comments within the script. ## Script Overview - The creation of Bash script that reads a text file containing users and groups. - Creation of users and groups as specified. - Setting up home directories with appropriate permissions, generating random passwords, and logging all actions to file. - Securely storing generated passwords. ## Let's dive into the script Every Bash script begins with the shebang line. ``` #!/bin/bash ``` This indicates that the script should be run using the Bash shell. ## Directory and File Setup ``` mkdir -p /var/log /var/secure touch /var/log/user_management.log touch /var/secure/user_passwords.txt chmod 600 /var/secure/user_passwords.txt ``` This ensures that the directories `/var/log` and `/var/secure` exist. The `-p` flag allows the command to create parent directories as needed without raising an error if the directory already exists, and creates the log file user_management.log in `/var/log` and the password file user_passwords.txt in `/var/secure`. Having created the `/var/secure` directory. Using `chmod 600` I set the permissions for user_passwords.txt to be readable and writable only by the file's owner. ## Action Logging Function This defines a function `log_action` that appends a timestamped log entry to the user_management.log file. ``` log_action() { echo "$(date) - $1" >> "/var/log/user_management.log" } ``` ## User Creation Function - I declared the `create_user()` function, defining three variables: user (username), groups (additional groups), and password (which will be generated). ``` create_user() { user, groups, and password local user="$1" local groups="$2" local password ``` I proceeded to check if the user already exists using the 'id' command. If the user exists, log "$user already exists" and exit the function. ``` if id "$user" &>/dev/null; then log_action "User $user already exists." return fi ``` Next, I created a personal group for the user with the same name as the username using `groupadd "$user"`. ``` groupadd "$user" ``` ## Group Handling and Creation The script takes the list of additional groups and splits them into separate items in a `group_array`. Logs a message that lists all the groups the user will be added to. The script then goes through each group in the array, removes any extra spaces from the group name, and checks if the group already exists. If the group doesn't exist, it creates the group. ``` IFS=' ' read -ra group_array <<< "$groups" # Log the group array log_action "User $user will be added to groups: ${group_array[*]}" for group in "${group_array[@]}"; do group=$(echo "$group" | xargs) # Trim whitespace if ! getent group "$group" &>/dev/null; then groupadd "$group" log_action "Group $group created." fi done ``` I created the user with a home directory `(-m)`, a default shell `(-s /bin/bash)`, and set the primary group to the personal group `(-g "$user")`. Logging the success or failure of the user creation. ``` useradd -m -s /bin/bash -g "$user" "$user" if [ $? 
-eq 0 ]; then log_action "User $user created with primary group: $user" else log_action "Failed to create user $user." return fi ``` Followed by, adding the user to the additional groups using usermod `-aG` and logging the action. This splits the additional groups into an array, logs them, trims whitespace, checks if each group exists, and creates any missing groups. ``` for group in "${group_array[@]}"; do usermod -aG "$group" "$user" done log_action "User $user added to groups: ${group_array[*]}" ``` **Random Password Generation & Granting Home Directory Permissions** I generated a 12-character random password using `/dev/urandom`, formatted it, and set it for the user using `chpasswd`. Appended the username and password to `user_passwords.txt` and stored them in the secure file. Setting the permissions for the user's home directory to be accessible only by the user changing ownership to the user. ``` password=$(</dev/urandom tr -dc A-Za-z0-9 | head -c 12) echo "$user:$password" | chpasswd echo "$user,$password" >> "/var/secure/user_passwords.txt" chmod 700 "/home/$user" chown "$user:$user" "/home/$user" log_action "Password for user $user set and stored securely." } ``` ## Main Script Execution This begins by checking if exactly one argument `<user_list_file>` is provided. If not, it prints usage instructions and exits. Then it checks if the provided 'User list file' exists. If not, it prints an error message and exits. For each line, it reads the username and groups, trims whitespace, replaces commas with spaces, and calls the `create_user()` function with the parsed values. Finally, it prints a message indicating that user creation is complete and where to find more information. ``` if [ $# -ne 1 ]; then echo "Usage: $0 <user_list_file>" exit 1 fi filename="$1" if [ ! -f "$filename" ]; then echo "Users list file $filename not found." exit 1 fi while IFS=';' read -r user groups; do user=$(echo "$user" | xargs) groups=$(echo "$groups" | xargs | tr -d ' ') groups=$(echo "$groups" | tr ',' ' ') create_user "$user" "$groups" done < "$filename" echo "User created. Type cat /var/log/user_management.log for more information." ``` This is the combined script.⬇️ ``` #!/bin/bash # Create directory for user logs and passwords mkdir -p /var/log /var/secure # Create logs file and passwords file touch /var/log/user_management.log touch /var/secure/user_passwords.txt # Grant read and write permissions for file owner only chmod 600 /var/secure/user_passwords.txt # Action Logging function log_action() { echo "$(date) - $1" >> "/var/log/user_management.log" } create_user() { # Define three variables passed to the function: user, groups, and password local user="$1" local groups="$2" local password # Check if user already exists if id "$user" &>/dev/null; then log_action "User $user already exists." return fi # Create personal group for the user groupadd "$user" # Create additional groups if they do not exist IFS=' ' read -ra group_array <<< "$groups" # Log the group array log_action "User $user will be added to groups: ${group_array[*]}" for group in "${group_array[@]}"; do group=$(echo "$group" | xargs) # Trim whitespace if ! getent group "$group" &>/dev/null; then groupadd "$group" log_action "Group $group created." fi done # Create user with home directory and shell, primary group set to the personal group useradd -m -s /bin/bash -g "$user" "$user" if [ $? -eq 0 ]; then log_action "User $user created with primary group: $user" else log_action "Failed to create user $user." 
return fi # Add the user to additional groups for group in "${group_array[@]}"; do usermod -aG "$group" "$user" done log_action "User $user added to groups: ${group_array[*]}" # Generate password and store it securely in a file password=$(</dev/urandom tr -dc A-Za-z0-9 | head -c 12) echo "$user:$password" | chpasswd # Store user and password securely in a file echo "$user,$password" >> "/var/secure/user_passwords.txt" # Set permissions and ownership for user home directory chmod 700 "/home/$user" chown "$user:$user" "/home/$user" log_action "Password for user $user set and stored securely." } # Check if user list file is provided if [ $# -ne 1 ]; then echo "Usage: $0 <user_list_file>" exit 1 fi filename="$1" if [ ! -f "$filename" ]; then echo "Users list file $filename not found." exit 1 fi # Read user list file and create users while IFS=';' read -r user groups; do user=$(echo "$user" | xargs) groups=$(echo "$groups" | xargs | tr -d ' ') # Replace commas with spaces for usermod group format groups=$(echo "$groups" | tr ',' ' ') create_user "$user" "$groups" done < "$filename" echo "User created. Type cat /var/log/user_management.log for more information." ``` ## Testing The Script - To use this script, provide a user list file as an argument. Each line in the file should contain a username and a comma-separated list of groups (optional), separated by a semicolon. For example: ``` light; sudo,dev,www-data idimma; sudo mayowa; dev,www-data ``` - To run the script, use this command. `./script_name.sh user_list_file.txt`. For a non-root user, use `sudo ./create_users.sh user_list_file.txt` ## Conclusion The Bash script automates user management by creating users, managing groups, generating secure passwords, and logging actions. It ensures the secure handling of passwords and provides detailed logs for auditing purposes. This automation is critical for efficiently onboarding new users and maintaining system security and organization. To rapidly upskill and advance your tech career, visit https://hng.tech/internship or https://hng.tech/premium for more information.
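As a quick sanity check after running the script (a hedged sketch, assuming the example `user_list_file.txt` shown above), you can confirm the users, groups, and log entries with standard commands:

```bash
# Verify a created user and its group memberships (username from the example file)
id light

# Confirm one of the supplementary groups exists
getent group dev

# Inspect the actions that were logged
sudo cat /var/log/user_management.log

# Generated passwords live in the restricted file (readable by its owner only)
sudo cat /var/secure/user_passwords.txt
```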
lanky23
1,910,825
React and Vue: A Technical Analysis
In the world of front-end development, React and Vue.js stand out as two of the most popular...
0
2024-07-03T20:30:19
https://dev.to/tobilobacodes/react-and-vue-a-technical-analysis-3ok7
In the world of front-end development, React and Vue.js stand out as two of the most popular JavaScript frameworks. They each have different use cases and ecosystem, but they are both widely utilized to create contemporary, interactive web apps. This article will provide a technical comparison of React and Vue, highlighting their key differences and similarities to help developers make an informed choice. **Overview** **React** React is a JavaScript library developed by Facebook for building user interfaces, particularly single-page applications where data changes over time. React emphasizes a component-based architecture, where UI is broken down into reusable components. It uses a virtual DOM to efficiently update the UI in response to state changes. **Vue** Vue.js, often referred to simply as Vue, is an open-source JavaScript framework for building user interfaces and single-page applications. Vue is designed to be incrementally adoptable, meaning you can use as much or as little of it as you need. Vue's core library focuses on the view layer only, and it's easy to integrate with other libraries or existing projects. **Core feature** **React** 1. JSX Syntax: React makes advantage of JSX, a syntax extension that makes it possible to write HTML inside of JavaScript. The code is easier to read and write because to the combination of HTML and JavaScript. 2. Virtual DOM: A replica of the actual DOM is what React's virtual DOM is. React updates the virtual DOM first, compares it with the real DOM, and updates only the modified portions when an object's state changes. 3. Component-Based Architecture: React places a strong emphasis on building reuseable parts. Each component can be used to create sophisticated user interfaces, and it maintains its own state. 4. Unidirectional Data Flow: React follows a unidirectional data flow, making it easier to understand and debug applications. Data flows from parent to child components through props. **Vue** 1. Templates: Vue uses an HTML-based template syntax that allows binding the rendered DOM to the underlying Vue instance data. Vue’s template syntax is more intuitive for developers familiar with HTML. 2. Reactive Data Binding: Vue’s reactivity system allows for declarative rendering and two-way data binding. Changes to data automatically update the DOM. 3. Single-File Components: Vue components are encapsulated within single-file components (SFCs) that contain HTML, JavaScript, and CSS. This promotes component reusability and maintainability. 4. Transition Effects: Vue has built-in directives to handle entering, leaving, and list transitions. It provides hooks to integrate third-party CSS animation libraries seamlessly. **Performance** **React:** React is very effective for dynamic applications because of its virtual DOM diffing mechanism, which optimizes updates. Nevertheless, there may be overhead when using external libraries to manage state. **Vue** In addition to being very effective, Vue's reactivity system also produces code that is easier to comprehend and maintain thanks to its template syntax. In most cases, Vue's performance is on par with React. **Conclusion** React and Vue are two strong tools with different advantages and disadvantages that may be used to create web apps. Because of its greater ecosystem and high degree of flexibility, React is a great option for intricate applications that call for unique solutions. 
Conversely, Vue provides a more integrated approach and is easier to learn, making it a good choice for smaller projects or teams that need a quicker setup. The decision between React and Vue ultimately comes down to the particular requirements of your project, your team's familiarity with each framework, and the ecosystem you want to work in. Both are solid options for building high-performing, maintainable web applications. To learn more, follow these links: [HNG Internship](https://hng.tech/internship) or [HNG Premium](https://hng.tech/premium).
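To make the syntax differences discussed above concrete, here are two minimal counter components. These are illustrative sketches only, not taken from the original post: React uses JSX with hooks, while Vue uses a single-file component with its template syntax.

```jsx
// Counter.jsx - React: JSX, state via hooks, unidirectional data flow
import { useState } from "react";

export default function Counter() {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>Count: {count}</button>;
}
```

```vue
<!-- Counter.vue - Vue: single-file component with reactive data binding -->
<template>
  <button @click="count++">Count: {{ count }}</button>
</template>

<script setup>
import { ref } from "vue";
const count = ref(0);
</script>
```

The React version expresses the UI entirely in JavaScript, while the Vue version keeps an HTML-like template with the logic (and, optionally, styles) co-located in one file.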
tobilobacodes
1,910,824
Beginner's Guide to Front-End Development: Building the Web of Tomorrow
Front-end development is the art of creating beautiful and interactive websites. Whether you're a...
0
2024-07-03T20:26:50
https://dev.to/ifecko77/beginners-guide-to-front-end-development-building-the-web-of-tomorrow-3ef5
webdev, beginners, tutorial, devops
Front-end development is the art of creating beautiful and interactive websites. Whether you're a complete beginner or have some coding experience, this guide will walk you through the essential steps to kickstart your journey as a front-end developer. 1. Understanding the Basics: HTML, CSS, and JavaScript To build a strong foundation, start by learning HTML, the language that structures web content. Next, dive into CSS to add style and design to your web pages. Then, explore JavaScript to bring interactivity and functionality to your creations. 2. Tools and Resources: Discover the various tools and resources that can enhance your front-end development workflow. From text editors and code editors to browser developer tools, find out which ones suit your needs and make your coding process more efficient. 3. Responsive Web Design: Learn the principles of responsive web design, which ensures that your websites adapt and look great on different devices and screen sizes. Explore media queries, flexible layouts, and mobile-first design to create user-friendly experiences. 4. Front-End Frameworks and Libraries: Discover popular front-end frameworks and libraries like Bootstrap, React, and Vue.js. These frameworks provide pre-built components and optimized code to speed up your development process. 5. Version Control and Collaboration: Learn about version control systems like Git and platforms like GitHub. All these tools allow you to track changes, collaborate with others, and manage your codebase effectively. 6. Accessibility and Performance Optimization: Understand the importance of creating accessible websites that can be used by everyone, regardless of their abilities. Learn techniques to optimize performance, such as minification, caching, and image optimization. 7. Building Real-World Projects: Put your skills into practice by building real-world projects. Start with small projects and gradually take on more complex ones. This hands-on experience will solidify your knowledge and showcase your abilities to potential employers or clients. Front-end development is an exciting field that combines creativity and technical skills. By following this guide and continuously learning and practicing, you'll be well on your way to becoming a skilled front-end developer. Embrace the challenges, stay curious, and enjoy the journey of building the web of tomorrow! I hope this article helps you kickstart your front-end development journey. Happy coding! πŸš€πŸ’»
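As a small illustration of point 3 above (responsive, mobile-first design), here is a hedged CSS sketch; the class name is made up for the example:

```css
/* Mobile-first: base styles target small screens */
.card {
  width: 100%;
  padding: 1rem;
}

/* Wider screens get a roomier layout via a media query */
@media (min-width: 768px) {
  .card {
    width: 48%;
  }
}
```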
ifecko77
1,910,822
New in Rails 7.2: Generated With .devcontainer
Exciting news from Rails! The latest Rails 7.2 has a new feature. When you create a new app, it now...
0
2024-07-03T20:23:35
https://jetthoughts.com/blog/new-in-rails-72-generated-with-devcontainer-changelog/
rails, changelog
Exciting news from Rails! The latest Rails 7.2 has a new feature. When you create a new app, it now generates a `.devcontainer` folder. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u76j5e4876kxiv4iz2nk.png) What's in the `.devcontainer`? ---------------------------- This folder has everything. It helps you start and work on your app. It's made for remote containers. Here's what's inside: 1. **Redis container**: For Kredis and ActionCable. 2. **Database**: You can choose SQLite, Postgres, MySQL, or MariaDB. 3. **Headless Chrome container**: For system tests. 4. **Active Storage**: Uses local disk and preview features. Why is this cool? ----------------- This feature is a time-saver, making the setup process a breeze. You get all you need in one go, which is great for development. Everything is ready to use - no more manual setup. Skipping parts -------------- Don't need some of these? No problem! With the `--skip-devcontainer` option, you have the power to skip what you don't need. You choose what's best for your app, putting you in control of your development process. Who benefits? ------------- Whether you're new to Rails or a seasoned developer, this feature is designed to make your life easier. If you're new, it will help you start faster. If you're experienced, it will save you time. It's all about making your life easier, no matter your level of expertise. Real-world use -------------- Imagine you're building a new app. You don't want to set up each part. Rails 7.2 does it for you. You get Redis, a database, Chrome, and storage ready. Start coding right away. Focus on what matters. Less setup, more development. Simple yet powerful ------------------- Rails 7.2 keeps it simple. Yet, it's powerful. This feature shows that. It's designed to make work easier. Whether you're a newbie or a pro, this helps. ### How to use it Create a new Rails app. Rails adds the .devcontainer. Need to skip it? Just add `--skip-devcontainer`. You're in control. In summary ---------- Rails 7.2 brings a handy feature. The .devcontainer folder has all you need. It's easy to set up and ready to go. Skip what you don't need. It's simple, fast, and powerful - perfect for all developers. This update is exciting. It's a small change, but it makes a significant impact. Enjoy coding with Rails 7.2! Happy coding!
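For example, a minimal sketch assuming Rails 7.2 is installed and using a hypothetical app name:

```bash
# New apps now come with a .devcontainer folder
rails new blog
ls blog/.devcontainer

# Don't want it? Skip it at generation time
rails new blog --skip-devcontainer
```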
jetthoughts_61
1,910,819
How to Manage the Overwhelming Information Online as a Beginner in HTML, CSS, and JavaScript
The internet is a vast storehouse of information. When you browse platforms like X, Instagram,...
0
2024-07-03T20:18:43
https://dev.to/george_kingi/how-to-manage-the-overwhelming-information-online-as-a-beginner-in-html-css-and-javascript-2cjp
The internet is a vast storehouse of information. When you browse platforms like X, Instagram, Facebook, Medium, and Dev.to and more, numerous people share valid opinions on learning, roadmaps to follow, abundant reading materials, and even quick money-making tips with minimal learning. This can become overwhelming, leaving you confused and unsure of where to start. Some individuals are more focused on gaining followers than providing genuine help. This article aims to help you relax, learn without pressure, and strategize on managing the overwhelming amount of information available online. As I write this article, I too aim to share valuable insights on how to handle the overwhelming amount of information available on the internet. I hope that this article will assist you, the reader, in understanding that its purpose is purely to help and not to add to the overwhelm. Here are a few strategies you can implement to manage the overwhelming amount of information available on the internet, especially as a beginner in HTML, CSS, and JavaScript. ### Start with a Structured Curriculum Select a dependable course or tutorial series that starts with the fundamentals and advances in a logical sequence. Free platforms such as freeCodeCamp, Codecademy, and MDN Web Docs provide comprehensive and well-structured learning paths. ### Set Clear Goals and Objectives It's essential to have a clear internal dialogue and precisely define your learning goals and objectives. Challenge yourself with questions using the 5 Ws (Why, When, Who, What, Where). For instance, consider whether you aim to build a personal website, grasp responsive design techniques, or establish a solid foundation in web development. ### Break Your Learning into Small Chunks Break down your learning into smaller, manageable segments. Begin by mastering HTML tags before progressing to CSS selectors and properties. ### Regular Practice Practice is key to mastery, so apply your knowledge by creating small projects. Whether daily or every few days, the goal is consistent practice. Start with simple tasks like building a basic webpage or styling a form. Getting overwhelmed is totally normal, Programming is not a walk to the park, so practice, practice and practice. ### Use Reliable Resources Stick to a few trusted resources to avoid information overload. Some recommended sites include: [MDN Web Docs](url) [W3Schools](url) [freeCodeCamp](url) https://www.theodinproject.com/ https://www.codecademy.com/ [Code in Place (stanford.edu)](url) ### Join a Community Engage actively with online communities like Reddit, local coding meetups, or Stack Overflow. Asking questions and joining discussions can offer emotional support and motivation especially when you feel overwhelmed. ### Keep a Learning Journal Keeping a daily record of what you learn acts as a second brain, as described in some books. This practice not only strengthens your understanding but also creates a valuable reference for revisiting concepts in the future. You can maintain a learning journal on your phone. ### Limit Time on Theory Finding a balance between studying theory and engaging in practical exercises is essential in coding. Hands-on experience is crucial; simply reading code isn't enoughβ€”you must actively code and continue learning through practice. ### Stay Patient and Persistent Learning to code demands patience as it unfolds gradually. Stay resilient and remain undeterred by challenges or setbacks along the way; persistence is the key to success. 
### Take Breaks Don’t overwhelm yourself. Take regular breaks to rest and recharge. This helps in maintaining focus and productivity. ### Conclusion Embarking on the journey to learn HTML, CSS, and JavaScript involves navigating a vast sea of information. To overcome this challenge, it's crucial to adopt structured learning strategies, such as choosing reliable courses and breaking down learning into manageable chunks. Practice plays a pivotal role; building small projects regularly reinforces understanding and hone practical skills. Engaging with communities and maintaining a learning journal provide valuable support and encouragement. Remember, patience and persistence are paramountβ€”coding is a gradual process, and perseverance through challenges ensures steady progress toward mastering these essential web development skills.
george_kingi
1,910,817
Understanding Deep vs Shallow Copy in JavaScript: A Comprehensive Guide
When working with JavaScript, understanding the differences between deep and shallow copies is...
0
2024-07-03T20:17:17
https://dev.to/dev_habib_nuhu/understanding-deep-vs-shallow-copy-in-javascript-a-comprehensive-guide-1k09
javascript, programming, tutorial, webdev
When working with JavaScript, understanding the differences between deep and shallow copies is crucial for managing data structures effectively. Whether you're handling arrays, objects, or other data types, knowing how to copy them correctly can prevent unexpected behavior and bugs in your code. In this article, we'll explore what deep and shallow copies are, how they differ, and when to use each type. **What is a Shallow Copy?** A shallow copy of an object is a copy whose properties share the same references as those in the original object. If the object contains nested objects, the references to those nested objects are copied, not the nested objects themselves. This means that changes to the nested objects in the copy will also affect the original object. Example ``` const original = { a: 1, b: { c: 2 } }; const shallowCopy = { ...original }; shallowCopy.b.c = 3; console.log(original.b.c); // Output: 3 ``` In the example above, modifying `shallowCopy.b.c` also modifies `original.b.c` because `shallowCopy` and `original` share the same reference to the nested `object b`. **What is a Deep Copy?** A deep copy of an object is a copy that duplicates all levels of the original object's properties. This means that nested objects are also copied, not just their references. Changes to the deep copy do not affect the original object and vice versa. Example ``` const original = { a: 1, b: { c: 2 } }; const deepCopy = JSON.parse(JSON.stringify(original)); deepCopy.b.c = 3; console.log(original.b.c); // Output: 2 ``` In this example, modifying `deepCopy.b.c` does not affect `original.b.c` because `deepCopy` is a completely separate copy of `original`. Methods to Create Shallow Copies 1.Object Spread Operator ({...}): ``` const shallowCopy = { ...original }; ``` 2.Array slice Method: ``` const originalArray = [1, 2, 3]; const shallowCopyArray = originalArray.slice(); ``` 3.Object.assign method ``` const shallowCopy = Object.assign({}, original); ``` **Methods to Create Deep Copies** 1.`JSON.parse` and `JSON.stringify`: ``` const deepCopy = JSON.parse(JSON.stringify(original)); ``` **Note**: _This method has limitations, such as not handling functions or undefined properties._ 2.Recursive function ``` function deepCopy(obj) { if (obj === null || typeof obj !== 'object') return obj; const copy = Array.isArray(obj) ? [] : {}; for (let key in obj) { if (obj.hasOwnProperty(key)) { copy[key] = deepCopy(obj[key]); } } return copy; } ``` When to Use Deep vs Shallow Copy Shallow Copy: Use when you need a copy of an object where changes to nested objects should reflect in both the original and the copy. Suitable for simple objects without nested structures. Deep Copy: Use when you need a completely independent copy of an object, especially if it contains nested objects. Essential for complex data structures to avoid unintended side effects. Understanding deep and shallow copies is fundamental for JavaScript developers. It helps ensure data integrity and prevents bugs that arise from unintended shared references. By knowing when and how to use each type of copy, you can write more robust and maintainable code.
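As a quick usage sketch of the recursive `deepCopy` helper defined above (the sample data is made up for illustration):

```javascript
const original = { a: 1, b: { c: 2 }, list: [1, 2, 3] };
const copy = deepCopy(original);

copy.b.c = 99;
copy.list.push(4);

console.log(original.b.c);   // 2, the nested object was not shared
console.log(original.list);  // [1, 2, 3]
```

In modern browsers and Node.js, the built-in `structuredClone(original)` is another way to make a deep copy, though like the JSON approach it cannot clone functions.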
dev_habib_nuhu
1,910,816
Exploring Frontend Technologies
Exploring Frontend Technologies A Comparison of Svelte and SolidJS With the continues changing and...
0
2024-07-03T20:13:29
https://dev.to/louis_aaron_ded001218986f/exploring-frontend-technologies-4nf9
Exploring Frontend Technologies A Comparison of Svelte and SolidJS With the continues changing and innovation around frontend development, making the right choice in regards to technology can make your project a successful one or a failed one. While popular frameworks like React, Angular, and Vue dominate the landscape, there are some developing technologies worth exploring. In this article I will be talking about Svelte and SolidJSβ€”two important frontend frameworks that are gaining traction. I will be comparing their features, advantages, and use cases, and I'll also share my thoughts on ReactJS, which we use extensively at HNG. Svelte: The Disappearing Framework Svelte, was created by Rich Harris, and has been making waves in the frontend community for its exceptional approach. Unlike traditional frameworks that work in the browser, Svelte transfers much of the work to compile time. Here's what sets Svelte apart: Key Features 1. Zero Runtime: Svelte compiles codes into highly effective vanilla JavaScript at build time. This means no framework overhead during runtime, resulting in faster performance. 2. Reactivity: Svelte's reactivity model is intuitive and straightforward. You declare reactive variables with the `$: syntax, and Svelte takes care of the rest. 3. Component Scope Styles: CSS in Svelte components is scoped by default, avoiding the need for complex styling solutions. Advantages of Svelte 1. Performance: With no runtime overhead, Svelte apps are blazing fast. The compiled code is optimized for performance, making it ideal for high-speed applications. 2. Simplicity: Svelte's syntax is clean and minimal, reducing code complications and making it easier to learn and use. 3.Bundle Size: Svelte's compiled output is typically smaller than frameworks that include a runtime, reducing load times and improving user experience. Uses Svelte is perfect for projects where performance and simplicity are paramount. It's particularly well-suited for: Single-page applications (SPAs) Interactive dashboards Lightweight widgets SolidJS: Reactive without the Virtual DOM SolidJS, created by Ryan Carniato, takes a different approach to frontend development. It combines the best aspects of React and Svelte, offering fine-grained reactivity without a virtual DOM. Here's what makes SolidJS stand out: Important Features of SolidJS 1. Fine-grained Reactivity: SolidJS reactivity is inspired by reactive programming libraries. State changes trigger precise updates to the DOM, making it incredibly efficient. 2. No Virtual DOM: Unlike React, SolidJS doesn't use a virtual DOM. It updates the actual DOM directly, which can result in faster rendering times. 3.JSX Syntax: SolidJS uses JSX, a syntax familiar to React developers, making it easy to transition to SolidJS. Advantages 1. Efficiency: SolidJS's fine-grained reactivity ensures that only the necessary parts of the DOM are updated, leading to efficient rendering. 2. Developer Experience: The use of JSX and a React-like API makes SolidJS approachable for developers already familiar with React. 3 Performance: Direct DOM updates can outperform virtual DOM diffing in many scenarios, making SolidJS a high-performance option. Uses SolidJS excels in scenarios that require high performance and precise reactivity, such as: - Complex SPAs with dynamic interfaces - Real-time applications - Applications with heavy user interaction My Experience with ReactJS at HNG At HNG, we rely heavily on ReactJS for our projects. 
React's component-based architecture and extensive ecosystem have made it a go-to choice for building scalable and maintainable applications. The virtual DOM, while not as efficient as Svelte's compile-time approach or SolidJS's fine-grained reactivity, strikes a good balance between performance and developer experience. In my time at HNG, I've come to appreciate React's flexibility and the vibrant community that surrounds it. React hooks, in particular, have transformed the way we manage state and side effects, making our code cleaner and more modular. If you're interested in learning more about the HNG Internship and the exciting projects we work on, check out the [HNG Internship website] (https://hng.tech/internship). If you're looking to hire top-notch developers, don't miss out on [HNG Hire](https://hng.tech/hire). Conclusion Both Svelte and SolidJS offer compelling alternatives to traditional frontend frameworks. Svelte's compile-time approach and SolidJS's fine-grained reactivity present unique advantages that can significantly improve performance and developer experience. While React remains a powerhouse at HNG, exploring these emerging technologies can provide valuable insights and new opportunities for innovation.Whether you choose Svelte, SolidJS, or stick with React, the key is to understand the strengths and trade-offs of each framework and select the one that best fits your project's needs. Happy coding!
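To ground the comparison, here are two minimal counters, shown only as illustrative sketches rather than excerpts from either project's documentation: one in Svelte and one in SolidJS.

```svelte
<!-- Counter.svelte - compiled away at build time, no framework runtime -->
<script>
  let count = 0;
  $: doubled = count * 2; // re-computed whenever count changes
</script>

<button on:click={() => (count += 1)}>
  Clicked {count} times (doubled: {doubled})
</button>
```

```jsx
// Counter.jsx - SolidJS: fine-grained reactivity, no virtual DOM
import { createSignal } from "solid-js";

export default function Counter() {
  const [count, setCount] = createSignal(0);
  // Only the text node reading count() updates when the signal changes
  return (
    <button onClick={() => setCount(count() + 1)}>
      Clicked {count()} times
    </button>
  );
}
```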
louis_aaron_ded001218986f
1,910,813
Top 10+ Wisata Kuliner di Tuban yang Terkenal Enak
Top 10+ Wisata Kuliner di Tuban yang Terkenal Enak Warung Mbak Narti Warung Soto Sleko Nasi Pecel...
0
2024-07-03T19:58:46
https://dev.to/alongwalker/top-10-wisata-kuliner-di-tuban-yang-terkenal-enak-2khj
[Top 10+ Wisata Kuliner di Tuban yang Terkenal Enak](https://id.alongwalker.co/top-10-wisata-kuliner-di-tuban-yang-terkenal-enak-s3363.html) 1. Warung Mbak Narti 2. Warung Soto Sleko 3. Nasi Pecel Madiun Jalan Pemuda 4. Es Bubur Sumsum Depan Toko Tetap Terang 5. Mak Mbing Opor 6. Ayam Panggang Tirtomoyo 7. Warung Cak Wi 8. Kari Rajungan Manunggal Jaya 9. Warung Bagong 10. RM Rahayu Kabupaten Tuban ternyata memiliki berbagai jenis kuliner khas yang terkenal enak untuk dicicipi. Inilah deretan tempat wisata kuliner hits di Tuban, Jawa Timur yang tidak boleh dilewatkan untuk menikmati sajian lezat. Kuliner khas suatu daerah memang tak boleh dilewatkan begitu saja. Salah satu yang bisa Anda kunjungi untuk menikmati kuliner adalah Tuban. Daerah satu ini memang terkenal akan sajian masakan khasnya yang sangat enak. Untuk itu, apabila Anda sedang berlibur di Tuban, ada baiknya Anda juga bisa mencicipi hidangan yang lezat dari tempat kuliner populer berikut ini. 1. Warung Mbak Narti [https://s1-id.alongwalker.co/wp-content/uploads/2024/07/image-top-10-wisata-kuliner-di-tuban-yang-terkenal-enak-cf03d7ba807270fd0107a1e848602d5d.jpg](https://s1-id.alongwalker.co/wp-content/uploads/2024/07/image-top-10-wisata-kuliner-di-tuban-yang-terkenal-enak-cf03d7ba807270fd0107a1e848602d5d.jpg) top 10+ wisata kuliner di tuban yang terkenal enak Image Credit: Google Maps (Tjiptomo Hidayat) Destinasi pertama yang bisa Anda kunjungi adalah Warung Mbak Narti. Jika dilihat dari namanya mungkin cenderung tidak menarik. Namun, ketika Anda melihat ragam menu yang disajikan oleh restoran ini tentunya akan membuat Anda ingin mampir. Rumah makan satu ini memiliki konsep tradisional modern. Ada banyak olahan seafood yang lezat bisa Anda temukan di tempat satu ini. sebut saja seperti kepiting, lobster dan juga rajungan bisa Anda nikmati saat berada di rumah makan ini. tidak hanya itu saja, olahan seafood tersebut dijual dengan harga yang sangat terjangkau. Untuk soal rasa, Anda tak perlu meragukannya lagi karena menu makanan disini memang sangat lezat. Lokasi: Jl. Manunggal, Keduran, Panyuran, Kec. Palang, Kab. Tuban, Jawa Timur. 2. Warung Soto Sleko top 10+ wisata kuliner di tuban yang terkenal enak Image Credit: Google Maps (Wawan Setyo) Hidangan soto memang menjadi menu favorit yang bisa Anda coba ketika sedang berada di kawasan yang ada di Indonesia. Seperti ketika Anda sedang menghabiskan waktu di Tuban, Anda bisa mencoba Warung Soto Sleko. Disini, Anda bisa menikmati lezatnya menu soto ayam dan juga ayam goreng. Anda pun bisa menemukannya dengan mudah di daerah Bundaran Sleko. Bagi yang tertarik mengunjunginya, Anda bisa datang mulai pukul 5 sore karena rumah makan ini baru buka pada jam tersebut. Menu soto ayam yang ada disini menjadi andalan bagi masyarakat setempat. Oleh karena itu, tak mengherankan jika tempatnya selalu ramai dipadati oleh pengunjung. Lokasi: Jl. Pahlawan No.7, Wire, Gedongombo, Kec. Semanding, Kab. Tuban, Jawa Timur. 3. Nasi Pecel Madiun Jalan Pemuda top 10+ wisata kuliner di tuban yang terkenal enak Image Credit: Google Maps (Tjun Sulestio) Meskipun Anda tidak berada di Madiun, namun ketika Anda berada di Tuban hidangan satu ini bisa Anda jadikan menu untuk sarapan. Berlokasi di sekitar Jalan Pemuda, pecel satu ini memang sangat ramai oleh pelanggan yang mengantri. Jika Anda tertarik, maka Anda pun bisa mengunjunginya mulai pagi hari. Hal ini dikarenakan tempat makan satu ini hanya buka dari pukul 6 pagi hingga siang hari saja. 
menu yang disediakan disini pastinya adalah pecel Madiun dengan bumbu kacangnya yang sangat khas. Apabila Anda berkesempatan makan di tempat satu ini, Anda pun bisa meminta rempeyek lebih banyak karena rempeyek disini memiliki rasa yang gurih dan renyah. Lokasi: Jl. Pemuda No.18, Sidomulyo, Kec. Tuban, Kab. Tuban, Jawa Timur. 4. Es Bubur Sumsum Depan Toko Tetap Terang top 10+ wisata kuliner di tuban yang terkenal enak Image Credit: Facebook.com @iin.sagita.5209 Kota Tuban memang dikenal sebagai kawasan yang panas. Hal ini dikarenakan lokasinya yang berada dekat pantai dan di sisi utara pulau Jawa membuatnya memiliki hawa yang panas ketika siang hari. Untuk itu, ketika Anda sedang haus dan gerah di kala siang, es bubur sumsum mungkin bisa menjadi salah satu hidangan yang wajib Anda coba. Bagi Anda yang sedang berada di Tuban, terdapat salah satu es bubur sumsum yang terkenal yakni letaknya di depan Toko Tetap Terang. Hidangan satu ini tidak hanya memiliki rasa yang segar melainkan cara pembuatannya masih menggunakan resep tradisional yang membuat cita rasanya sangat lezat. Ketika Anda mencicipi es bubur sumsum ini, pastinya haus dahaga yang Anda rasakan langsung menghilang. Es bubur sumsum ini pun masih menggunakan bahan bahan alami, bahkan untuk gulanya sendiri juga masih murni tidak memakai pemanis buatan. Tak mengherankan rasanya masih otentik dan pastinya lezat. Lokasi: Jl. Pemuda, Sidomulyo, Kec. Tuban, Kab. Tuban, Jawa Timur. 5. Mak Mbing Opor top 10+ wisata kuliner di tuban yang terkenal enak Image Credit: Google Maps (Liyek Ayoe) Jika Anda mengira bahwa opor ayam yang ada di rumah makan ini sama dengan opor ayam yang biasanya Anda konsumsi, tentunya salah besar. Sebab, Mak Mbing Opor memiliki menu opor ayam yang berbeda dari biasanya. Di rumah makan ini, pembuatan opor tersebut masih menggunakan resep tradisional. Hal inilah yang menyebabkan rasa dari opor ayam tersebut sangat enak dan berbeda dengan opor ayam yang lain. Jika biasanya Anda mendapati kuah dari opor ayam tersebut berwarna kuning, namun di Mak Mbing Anda akan mendapatkan kuah opor ayam dengan warna yang berbeda. Sebab kuah opor ayam disini memiliki warna kuning kemerahan. Sedangkan untuk jenis ayam yang biasa digunakannya adalah jenis ayam kampung bukanlah ayam potong. Sehingga kuah kaldu ayamnya pun akan memiliki rasa yang sangat gurih. Selain itu, cara memasak opor ini, ayamnya akan terlebih dahulu dibakar. Hal inilah yang menyebabkan rasa dari opor ayam satu ini berbeda. Lokasi: Jl. Sultan Agung No.23, Ronggomulyo, Kec. Tuban, Kab. Tuban, Jawa Timur. 6. Ayam Panggang Tirtomoyo top 10+ wisata kuliner di tuban yang terkenal enak Image Credit: Google Maps (Primawan Wisda) Ayam panggang menjadi salah satu hidangan khas yang banyak disukai oleh banyak orang. Jika Anda sedang berada di Tuban, wajib hukumnya untuk mencoba Ayam Panggang Tirtomoyo. Rumah makan ini tidak hanya menyajikan ayam dengan rasa yang lezat saja. Anda juga akan disuguhkan dengan menu ayam yang masih segar sehingga dagingnya pun akan terasa lebih gurih. Hal ini dikarenakan ayam bakar tersebut baru diolah ketika pengunjung memesan. Sehingga dibutuhkan kesabaran untuk bisa menyantap hidangan yang enak ini. ayam panggang yang dimasak dengan menggunakan bumbu pedas ini tentunya akan memiliki cita rasa yang unik dan berbeda. Oleh karena itu, bagi Anda penggemar ayam panggang, tentunya wajib datang ke rumah makan satu ini. ayam panggang yang lezat tersebut akan sangat pas apabila disantap dengan sepiring nasi hangat. 
Karena rasa ayamnya yang enak, rasanya satu piring nasi saja tidak akan cukup untuk Anda. Lokasi: Krajan, Bektiharjo, Kec. Semanding, Kab. Tuban, Jawa Timur. 7. Warung Cak Wi top 10+ wisata kuliner di tuban yang terkenal enak Image Credit: Google Maps (Fahrizal Ardiansyah) Rumah makan yang terbilang cukup sederhana ini juga wajib Anda kunjungi ketika sedang berada di Tuban. Namun jangan salah, meskipun bukan salah satu rumah makan yang mewah, Anda pun akan mendapati banyak pelanggan yang datang ke tempat satu ini. hal ini dikarenakan terdapat salah satu menu andalan yakni pepes belut. Jika Anda merupakan penggemar makanan pedas, sebaiknya Anda jangan sampai kelewatan untuk mencoba makanan satu ini. oleh karena itu, Anda pun bisa mampir dan menyantap hidangan ini dengan menggunakan nasi jagung. Kedua menu andalan tersebut dijamin akan membuat Anda ketagihan. Lokasi: Jl. Pemuda, Krajan, Sambonggede, Kec. Merakurak, Kab. Tuban, Jawa Timur. 8. Kari Rajungan Manunggal Jaya top 10+ wisata kuliner di tuban yang terkenal enak Image Credit: Google Maps (Adi Ginanjar) Saat Anda sedang berada di Tuban, jangan sampai melewatkan untuk mencoba rajungan. Hidangan satu ini memiliki rasa yang enak dengan cita rasa yang khas. Bahkan bagi Anda yang sedang mencari rumah makan khas yang telah tersohor, maka Kari Rajungan Manunggal Jaya adalah tempat yang bisa Anda tuju. Menu rajungannya yang enak ini tentunya bisa Anda santap dengan sepiring nasi hangat. Tak hanya itu saja, jika Anda tidak terlalu doyan dengan olahan menu rajungan, di rumah makan ini juga terdapat beberapa menu lain yang bisa Anda pesan. Sebut saja seperti ikan tongkol dan ikan pe yang juga menjadi favorit para pelanggan. Lokasi: Jl. Manunggal No.100, Keduran, Panyuran, Kec. Palang, Kab. Tuban, Jawa Timur 9. Warung Bagong top 10+ wisata kuliner di tuban yang terkenal enak Image Credit: Google Maps (Iwan Sujarwo) Pernah mencoba cicipi menu dengan bahan dasar kodok? Jika belum dan Anda termasuk tipe orang yang penasaran maka Warung Bagong adalah tempat kuliner yang tepat. Sebab, rumah makan satu ini menyajikan menu kodok dan belut yang super lezat. Biasanya untuk menu kodoknya akan dimasak dengan bumbu tauco. Sedangkan untuk menu belut di restoran ini akan dimasak dengan bumbu rica rica. Apabila Anda memang penggemar masakan pedas, maka olahan dengan menu rica rica akan menjadi menu yang wajib untuk Anda coba. Tak hanya itu saja, disini juga tersedia pilihan nasi jagung hangat yang bisa Anda nikmati. Untuk itu, bagi Anda yang ingin mencoba kelezatan daging kodok bisa mampir ke rumah makan satu ini. Salah satu rekomendasi menu yang wajib Anda coba adalah belut rica ricanya. Sebab, bumbu rica rica yang ada di restoran ini sangat menggoda. Tentunya akan membuat Anda ketagihan. Lokasi: Jl. Majapahit, Tegalagung, Dukoh, Kec. Semanding, Kab. Tuban, Jawa Timur. 10. RM Rahayu top 10+ wisata kuliner di tuban yang terkenal enak Image Credit: Foursquare.com Tempat wisata kuliner lainnya yang wajib Anda singgahi saat berada di Tuban adalah RM Rahayu. Disini, Anda akan menemukan banyak sekali makanan khas Tuban. Namun, menu yang paling populer dan harus Anda cicipi adalah Garang Asem. Garang asem yang dijual di rumah makan ini memiliki cita rasa yang berbeda dengan tempat lain. Hal ini dikarenakan RM Rahayu tidak menggunakan santan di dalam garang asemnya. Sehingga Anda pun bisa menikmati garang asem dengan kuah yang bening. Selain itu, ayam yang dimasak dengan garang asem tersebut juga memiliki daging yang sangat lembut, Anda pun wajib untuk mencobanya. 
Lokasi: Jl. Basuki Rachmad No.135, Doromukti, Kec. Tuban, Kab. Tuban, Jawa Timur. Beberapa tempat wisata kuliner di Tuban tersebut memang memiliki berbagai macam menu favorit atau andalannya masing-masing. Sehingga ketika Anda sedang berada di kabupaten ini, ada baiknya jika mencicipi salah satu menu yang khas tersebut. Hal ini dikarenakan Anda belum tentu bisa menemukannya di daerah lain.
alongwalker
1,910,620
Singleton Design Pattern
O padrão de design Singleton é um dos mais importantes e frequentemente utilizados na programação de...
0
2024-07-03T19:54:23
https://dev.to/rflpazini/singleton-design-pattern-1n51
designpatterns, go, softwaredevelopment, coding
O padrΓ£o de design Singleton Γ© um dos mais importantes e frequentemente utilizados na programação de software. Ele assegura que uma classe tenha apenas uma ΓΊnica instΓ’ncia durante o tempo de execução da aplicação e fornece um ponto de acesso global a essa instΓ’ncia. Neste artigo, discutiremos a importΓ’ncia do Singleton, como implementΓ‘-lo em Golang e os benefΓ­cios que ele traz, especialmente em ambientes concorrentes. ## O que Γ© o Singleton? O Singleton Γ© um padrΓ£o de design que restringe a instΓ’ncia de uma classe a uma ΓΊnica instΓ’ncia. Γ‰ particularmente ΓΊtil em situaçáes onde um ΓΊnico ponto de controle ou um ΓΊnico recurso compartilhado Γ© necessΓ‘rio, como: * Gerenciadores de configuração, onde as configuraçáes da aplicação precisam ser centralizadas. * Pools de conexΓ£o a banco de dados, onde um nΓΊmero limitado de conexΓ΅es deve ser gerenciado de forma eficiente. * Loggers, onde a consistΓͺncia dos registros Γ© crucial. ## Por que utilizar o Singleton? Vou listar alguns pontos sobre a implementação desde Pattern que fazem mais sentido e tambΓ©m para mostrar que nem tudo sΓ£o flores, alguns dos problemas que podemos ter com ele. ### Vantagens * ConsistΓͺncia Global: Garante que todos os pontos da aplicação utilizem a mesma instΓ’ncia, proporcionando consistΓͺncia de dados e comportamento. * Controle de Acesso: Centraliza o controle de criação e acesso Γ  instΓ’ncia, facilitando a manutenção e o gerenciamento do ciclo de vida do objeto. * EficiΓͺncia de Recursos: Evita a criação desnecessΓ‘ria de mΓΊltiplas instΓ’ncias, economizando recursos de memΓ³ria e processamento. ### Desvantagens * Dificuldade de Testes: Singletons podem tornar a escrita de testes unitΓ‘rios mais difΓ­cil, pois introduzem estados globais que precisam ser gerenciados. * Aumento do Acoplamento: O uso excessivo de Singletons pode levar a um acoplamento mais rΓ­gido entre componentes, dificultando a manutenção e evolução da aplicação. ## Implementando uma Singleton Para implementar uma singleton vou utilizar Golang. Nesta linguagem temos que ter uma atenção especial Γ  concorrΓͺncia para garantir que apenas uma instΓ’ncia seja criada, mesmo quando mΓΊltiplas goroutines tentam acessar a instΓ’ncia simultaneamente. Para deixar nosso exemplo mais prΓ³ximo do mundo real, vamos criar um Logger para nossa aplicação. Um logger Γ© uma ferramenta comum em aplicaçáes que precisa ser ΓΊnica para garantir a consistΓͺncia dos logs. ### 1 - Definindo a estrutura Primeiro, definimos a estrutura que queremos que tenha uma ΓΊnica instΓ’ncia. ```go package logger import ( "fmt" "sync" ) type Logger struct {} var loggerInstance *Logger ``` ### 2 - Implementando a função NewInstance A função `NewInstance` Γ© responsΓ‘vel por retornar a instΓ’ncia ΓΊnica da estrutura Singleton. Utilizamos um [mutex](https://go.dev/tour/concurrency/9) para garantir a seguranΓ§a em ambientes concorrentes, implementando a verificação dupla de bloqueio (double-checked locking) para eficiΓͺncia. ```go package logger import ( "fmt" "sync" ) type Logger struct{} var logger *Logger var mtx = &sync.Mutex{} func NewInstance() *Logger { if logger == nil { mtx.Lock() defer mtx.Unlock() if logger == nil { fmt.Println("Creating new Logger") logger = &Logger{} } } else { fmt.Println("Logger already created") } return logger } ``` ### 3 - Implementando os tipos de log Uma ferramenta de Log sempre tem alguns tipos de log, como por exemplo Info para apenas mostrar as informaçáes, Error para mostrar erros e assim por diante. 
Γ‰ uma forma de filtrarmos tambΓ©m o tipo de informação que queremos mostrar em nossa aplicação. EntΓ£o vamos criar um mΓ©todo que irΓ‘ mostrar nosso log com o tipo `Info`. Para isso vamos criar uma função que receberΓ‘ nossa mensagem de log e a formatarΓ‘ para o formato INFO. ```go package logger import ( "fmt" "sync" "time" ) const ( INFO string = "INFO" ) type Logger struct{} var logger *Logger var mtx = &sync.Mutex{} func NewInstance() *Logger { if logger == nil { mtx.Lock() defer mtx.Unlock() if logger == nil { fmt.Println("Creating new logger") logger = &Logger{} } } else { fmt.Println("Logger already created") } return logger } func (l *Logger) Info(message string) { fmt.Printf("%s - %s: %s\n", time.Now().UTC().Format(time.RFC3339Nano), INFO, message) } ``` ### 4 - Usando o Logger E para utilizar nosso novo logger, vamos instancia-lo dentro do nosso package main e criar um log para ver como funciona essa implementação. ```go package main import ( "playground-go/pkg/logger" ) func main() { log := logger.NewInstance() log.Info("This is an example of log") } ``` Esse Γ© o resultado quando executamos o programa: ```shell Creating new logger 2024-07-03T19:34:57.609599Z - INFO: This is an example of log ``` Se quisermos testar se o `NewInstance` estΓ‘ realmente garantindo que apenas teremos uma instancia rodando, podemos fazer o seguinte teste. ```go package main import ( "fmt" "playground-go/pkg/logger" ) func main() { log := logger.NewInstance() log.Info("This is an example of log") log2 := logger.NewInstance() log2.Info("This is another example of log") if log == log2 { fmt.Println("same instance") } else { fmt.Println("different instance") } } ``` Nossos logs mudaram e agora podemos ver que bloqueamos a criação de uma nova instancia: ```shell Creating new logger 2024-07-03T19:45:19.603783Z - INFO: This is an example of log Logger already created 2024-07-03T19:45:19.603793Z - INFO: This is another example of log same instance ``` ## ConclusΓ£o O padrΓ£o Singleton Γ© uma ferramenta poderosa para garantir que apenas uma instΓ’ncia de uma classe especΓ­fica exista durante o tempo de execução da aplicação. No exemplo do logger, vimos como esse padrΓ£o pode ser aplicado para garantir a consistΓͺncia dos logs em toda a aplicação. Espero que isso ajude vocΓͺ a entender melhor o Singleton em Golang.
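Como complemento, e apenas como um esboço ilustrativo que não faz parte do artigo original, a biblioteca padrão do Go também oferece `sync.Once`, que elimina o double-checked locking manual:

```go
package logger

import (
	"fmt"
	"sync"
)

var (
	once     sync.Once
	instance *Logger
)

// NewInstance retorna sempre a mesma instΓ’ncia; once.Do garante que a
// função de inicialização execute uma única vez, mesmo com várias
// goroutines chamando simultaneamente.
func NewInstance() *Logger {
	once.Do(func() {
		fmt.Println("Creating new logger")
		instance = &Logger{}
	})
	return instance
}
```

O comportamento observado pelo chamador é o mesmo da versão com mutex, mas com menos código e sem o risco de esquecer a segunda verificação.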
rflpazini
1,910,811
Demo post
please no reading
0
2024-07-03T19:51:38
https://dev.to/notedbyneosahadeo/demo-post-21dj
please no reading
neosahadeo
1,910,635
MVP: The best Tech Stack
In today's blog post, I talk about the best tech stack for developing a Minimum Viable Product (MVP)....
0
2024-07-03T19:45:50
https://dev.to/raphael-dumhart/mvp-the-best-tech-stack-3994
startup, learning
In today's blog post, I talk about the best tech stack for developing a Minimum Viable Product (MVP). It is a highly opinionated post based on my experience founding two startups and working with many more. The focus of an MVP is to validate ideas with minimal effort and time. I emphasize using technologies that the development team is already familiar with to ensure rapid development and iteration. I discuss typical MVP team structures, ideal architectural choices like simple client-server models, and advise against complex architectures like microservices for MVPs. Testing strategies should prioritize functional tests and employ a pragmatic approach. Cloud computing is recommended for deployment due to its scalability and cost-efficiency. Moving from MVP to a maintainable product involves strategies like rewriting, refactoring, or incremental replacement using patterns like Strangler Fig. The key takeaway is that the MVP phase is about speed and validation, setting the foundation for a more robust product later. For more detailed insights, you can read the full post in [German](https://www.raphaeldumhart.at/blog/mvp-der-beste-tech-stack/). Disclaimer: This post was partly created with AI for summary and translation.
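To illustrate the incremental replacement idea (Strangler Fig) mentioned above, here is a hedged sketch with hypothetical service names, not taken from the original post. Inside an nginx `server` block, a reverse proxy can route newly rewritten endpoints to the replacement service while everything else still hits the MVP:

```nginx
# Newly rewritten endpoints go to the new service
location /api/v2/ {
    proxy_pass http://new-service:8080;
}

# Everything else is still served by the original MVP
location / {
    proxy_pass http://legacy-mvp:3000;
}
```

Over time more routes move across, until the original MVP can be retired.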
raphael-dumhart
1,910,633
Ultimate CSS Cheat Sheets and Blogs resources
Collection of CSS cheatsheet, prepare for CSS interviews, tips and tricks!! Under the...
0
2024-07-03T19:40:11
https://dev.to/shreyvijayvargiya/ultimate-css-cheat-sheets-and-blogs-resources-8h7
css, tailwindcss, cheatsheet, webdev
Collection of CSS cheatsheet, prepare for CSS interviews, tips and tricks!! ## Under the Hood "**Cheat sheet and of not much use**" this is my thinking when I was introduce to it for the first time. But this year in Jan when I was practising DSA again I've figured out the ultimate benefits of any cheat sheet in programming language.Cheat sheets are a kind of ready-to-use notes or overview making it easy for developers to prepare for interviews, and write code faster and better. ## CSS cheat Sheets ## Overapi cheatsheet website screenshotCSS properties [https://overapi.com/css](https://overapi.com/css) ![](https://firebasestorage.googleapis.com/v0/b/ihatereading-4ba52.appspot.com/o/Karyams%2FQXq1wZdVU3X7pGJJjdiB2JSavdk1%2Fimages%2FScreenshot%202024-07-03%20at%205.55.09%E2%80%AFPM.png?alt=media&token=e7766de6-054f-43fb-8b6e-78534bc4f269) ## CSS 2 Cheat Sheet & Quick References ![](https://firebasestorage.googleapis.com/v0/b/ihatereading-4ba52.appspot.com/o/Karyams%2FQXq1wZdVU3X7pGJJjdiB2JSavdk1%2Fimages%2FScreenshot%202024-07-03%20at%206.21.04%E2%80%AFPM.png?alt=media&token=b29ba412-0ccd-4af2-925d-734194f2d6de) [Website](https://cheatsheets.zip/css3) ## HTML and CSS cheatsheet In-Depth Tweet about HTML and CSS ![](https://firebasestorage.googleapis.com/v0/b/ihatereading-4ba52.appspot.com/o/Karyams%2FQXq1wZdVU3X7pGJJjdiB2JSavdk1%2Fimages%2FScreenshot%202024-07-03%20at%205.54.37%E2%80%AFPM.png?alt=media&token=fd45aac3-d4af-482d-b9af-6126efedd03c) [Reference](https://htmlcheatsheet.com/css/?ref=ihatereading) ## CSS properties ![](https://firebasestorage.googleapis.com/v0/b/ihatereading-4ba52.appspot.com/o/Karyams%2FQXq1wZdVU3X7pGJJjdiB2JSavdk1%2Fimages%2FGRd4kjFb0AM--V2.jpeg?alt=media&token=9c9cc9df-be5f-4978-8d1f-30f0aaacf2ed) [Reference](https://x.com/NanouuSymeon/status/1343694215604301824) ## Font Sizes Cheatsheet ![](https://firebasestorage.googleapis.com/v0/b/ihatereading-4ba52.appspot.com/o/Karyams%2FQXq1wZdVU3X7pGJJjdiB2JSavdk1%2Fimages%2FEv8_ccgXcAAFm3B.png?alt=media&token=17e0c391-0758-4ae3-a4e9-2f3589b5cc9f) [Reference](https://x.com/denicmarko/status/1368882923978579975) ## CSS Alignments ![](https://firebasestorage.googleapis.com/v0/b/ihatereading-4ba52.appspot.com/o/Karyams%2FQXq1wZdVU3X7pGJJjdiB2JSavdk1%2Fimages%2FEvfcYy7WgAMvWhW.jpeg?alt=media&token=64e658b7-e3a7-4aa1-93f1-a6f693dfec03) [Reference](https://x.com/Prathkum/status/1366803338776481793) ## Color Cheat Sheet ![](https://firebasestorage.googleapis.com/v0/b/ihatereading-4ba52.appspot.com/o/Karyams%2FQXq1wZdVU3X7pGJJjdiB2JSavdk1%2Fimages%2FEqXCqJdXAAMm3oN.png?alt=media&token=01f6cb5a-ae70-4676-a53f-c0aff47464ca) [Reference](https://x.com/Prathkum/status/1366803338776481793) ## Blogs on CSS Cheatsheet [CSS Selectors](https://www.freecodecamp.org/news/css-selectors-cheat-sheet-for-beginners/?ref=dailydev) [](https://www.freecodecamp.org/news/css-selectors-cheat-sheet-for-beginners/?ref=dailydev)[A Complete Guide to CSS](https://www.sitepoint.com/css-logical-properties-guide/) [Logical Properties](https://www.sitepoint.com/css-logical-properties-guide/) [](https://www.sitepoint.com/css-logical-properties-guide/)[50 cheatsheet guides and references](https://speckyboy.com/css-cheatsheets-resources-guides/) [](https://speckyboy.com/css-cheatsheets-resources-guides/)[Top 6 CSS cheatsheet](https://dev.to/devshefali/the-top-6-css-cheatsheets-that-will-save-you-hours-2lp1?ref=dailydev) [](https://dev.to/devshefali/the-top-6-css-cheatsheets-that-will-save-you-hours-2lp1?ref=dailydev)[Complete Flexbox 
tutorials](https://dev.to/joyshaheb/flexbox-cheat-sheets-in-2021-css-2021-3edl?ref=ihatereading) [](https://dev.to/joyshaheb/flexbox-cheat-sheets-in-2021-css-2021-3edl?ref=ihatereading)[Flexbox tutorials and properties](https://www.freecodecamp.org/news/css-flexbox-tutorial-with-cheatsheet/?ref=ihatereading) [](https://www.freecodecamp.org/news/css-flexbox-tutorial-with-cheatsheet/?ref=ihatereading)[CSS tutorials and guides](https://www.freecodecamp.org/news/css-grid-tutorial-with-cheatsheet/?ref=ihatereading) [](https://www.freecodecamp.org/news/css-grid-tutorial-with-cheatsheet/?ref=ihatereading)[CSS units cheatsheet](https://yurilee.hashnode.dev/css-units-are-confusing-af?ref=dailydev) [](https://yurilee.hashnode.dev/css-units-are-confusing-af?ref=dailydev)[CSS flexbox cheatsheet](https://aakanksha.hashnode.dev/css-flexbox-cheat-sheet?ref=ihatereading) [](https://aakanksha.hashnode.dev/css-flexbox-cheat-sheet?ref=ihatereading)[Tailwind CSS cheatsheet](https://flowbite.com/tools/tailwind-cheat-sheet/?ref=ihatereading) [](https://flowbite.com/tools/tailwind-cheat-sheet/?ref=ihatereading)[CSS flexbox cheatsheet](https://aakanksha.hashnode.dev/css-flexbox-cheat-sheet?ref=ihatereading) [](https://aakanksha.hashnode.dev/css-flexbox-cheat-sheet?ref=ihatereading)[CSS units cheatsheet](https://www.30secondsofcode.org/css/s/units-cheatsheet/?ref=ihatereading) [](https://www.30secondsofcode.org/css/s/units-cheatsheet/?ref=ihatereading)[CSS cheatsheet by Devhint](https://devhints.io/css) [](https://devhints.io/css)[Hackrio cheatsheet CSS](https://hackr.io/blog/css-cheat-sheet) [](https://hackr.io/blog/css-cheat-sheet)[CSS cheatsheet hostinger tutorials](https://www.hostinger.com/tutorials/css-cheat-sheet) [](https://www.hostinger.com/tutorials/css-cheat-sheet)[MDN docs for CSS](https://developer.mozilla.org/en-US/docs/Web/CSS/Reference) [](https://developer.mozilla.org/en-US/docs/Web/CSS/Reference)[CSS practical guide by Topotal](https://www.toptal.com/css/css-cheat-sheet) [](https://www.toptal.com/css/css-cheat-sheet)[CSS Cheatsheet comprehensive guide](https://codepa.dev/css-cheat-sheet/) [](https://codepa.dev/css-cheat-sheet/)[CSS cheatsheet single webpage](https://www.petefreitag.com/cheatsheets/css/) [](https://www.petefreitag.com/cheatsheets/css/)[Flex cheatsheet](https://yoksel.github.io/flex-cheatsheet/)[Guide to flexbox](https://css-tricks.com/snippets/css/a-guide-to-flexbox/) ## Github Repos [Cheatsheets-for-developers](https://github.com/AakashRao-dev/cheatsheets-for-developers)[Frontend Cheatsheet](https://github.com/logeshpaul/Frontend-Cheat-Sheets)[Huge Collection of Cheatsheet](https://github.com/JonnyBanana/Huge-Collection-of-CheatSheet)[CSS cheatsheet](https://github.com/AdamMarsden/css-cheat-sheet)[Tailwind CSS cheatsheet](https://github.com/umeshmk/Tailwindcss-cheatsheet) ## PDF doc [CSS cheatsheet PDF doc](https://websitesetup.org/wp-content/uploads/2019/11/wsu-css-cheat-sheet-gdocs.pdf)
shreyvijayvargiya
1,910,632
STAGE ZERO: BACKEND TRACK!
Python #WebScraping #Django Firstly my name is Daniel-Caleb Cheruiyot Ronoh and am a...
0
2024-07-03T19:38:51
https://dev.to/danielcaleb/stage-zero-backend-track-1f1d
#Python #WebScraping #Django Firstly my name is Daniel-Caleb Cheruiyot Ronoh and am a mobile and backend developer as well as a student. Recently, I tackled a project: creating APIs that scrape job data from a popular job site in Kenya, filter the necessary information, and store it in a database for easy access through various endpoints. This journey has been both challenging and enlightening, and it's precisely the kind of experience I look forward to expanding upon during the HNG Internship. My goal was to create an efficient solution that scrapes job data, filters out the essential information, and makes it easily accessible through a RESTful API built with Django and the Django REST framework. ## Step 1: Setting Up Django and Django REST Framework Next, I needed to set up a Django project and integrate the Django REST framework. I created a new Django project and app, then installed the necessary packages. ``` django-admin startproject job_scraper cd job_scraper django-admin startapp jobs pip install djangorestframework ``` I then updated the β€˜settings.py’ to include the REST framework and the new app. ``` INSTALLED_APPS = [ ... 'rest_framework', 'jobs', ] ``` ## Step 2: Scraping Job Data The first challenge was to scrape data from the job site. I chose Python for its robust libraries such as BeautifulSoup and Requests, which make web scraping straightforward. ## Step 3: Creating the Database Models I created a model to store the job information in the models.py file of the jobs app. ``` from django.db import models # Create your models here. from django.conf import settings from django.db.models.signals import post_save from django.dispatch import receiver from rest_framework.authtoken.models import Token @receiver(post_save, sender=settings.AUTH_USER_MODEL) def create_auth_token(sender, instance=None, created=False, **kwargs): if created: Token.objects.create(user=instance) class JobFunctions(models.Model): jobFunction = models.TextField(max_length=100) class JobIndustries(models.Model): jobIndustries = models.TextField(max_length=100) class JobLocation(models.Model): jobLocation = models.TextField(max_length=100) class JobImages(models.Model): jobImages = models.URLField(max_length=1000) class Jobs(models.Model): job_title =models.CharField(max_length=1000) scraped_date = models.CharField(max_length=1000) job_link = models.URLField(max_length=5000) Job_Image = models.ForeignKey(JobImages, related_name='Job_Images', on_delete=models.CASCADE) Job_Function=models.ForeignKey(JobFunctions, related_name='Job_Functions', on_delete=models.CASCADE) Job_Industries=models.ForeignKey(JobIndustries, related_name='Job_Industries', on_delete=models.CASCADE) Job_Location=models.ForeignKey(JobLocation, related_name='Job_Location', on_delete=models.CASCADE) Job_Details = models.TextField(max_length=1000, default='') def __str__(self): return self.scraped_date class JobDetails(models.Model): job = models.ForeignKey(Jobs, related_name='job_details', on_delete=models.CASCADE) details=models.TextField(max_length=20000) bold = models.BooleanField(default=False) I then ran the migrations to create the table in the database. 
python manage.py makemigrations python manage.py migrate ``` ## Step 4: Scraping Job Data ``` from django.shortcuts import render from .models import Jobs, JobFunctions, JobIndustries, JobLocation, JobDetails, JobImages from rest_framework import viewsets from .serializers import JobSerializer, JobFunctionSerializer, JobIndustriesSerializer, JobLocationSerializer, JobDetailsSerializer, JobImagesSerializer from datetime import datetime, timedelta from bs4 import BeautifulSoup import requests link = 'https://www.brightermonday.co.ke/jobs' page = requests.get(link) if page.status_code == 200: soup = BeautifulSoup(page.content, 'html.parser') # DROPDOWN MENU STARTS HERE !!! # SELECTION FOR JOB FUNCTIONS select_functions = soup.find('select', class_="w-full h-10 pl-2 text-gray-500 rounded-md border border-gray-300 hover:border-gray-400 focus:border-gray-400 placeholder-gray-400 focus:placeholder-gray-900 mb-3 w-full md:mb-0 md:mr-3") options_functions = select_functions.find_all('options') for option in options_functions: functions = JobFunctions() functions.jobFunction=option.get_Text() functions.save() # SELECTION FOR JOB INDUSTRIES select_industries = soup.find('select', class_="w-full h-10 pl-2 text-gray-500 rounded-md border border-gray-300 hover:border-gray-400 focus:border-gray-400 placeholder-gray-400 focus:placeholder-gray-900 mb-3 w-full md:mb-0 md:mr-3") options_industries = select_industries.find_all('options') for option in options_industries: industries = JobIndustries() industries.jobIndustries=option.get_Text() industries.save() # SELECTION FOR JOB LOCAIONS select_functions = soup.find('select', class_="w-full h-10 pl-2 text-gray-500 rounded-md border border-gray-300 hover:border-gray-400 focus:border-gray-400 placeholder-gray-400 focus:placeholder-gray-900 mb-3 w-full md:mb-0 md:mr-3") options_locations = select_functions.find_all('options') for option in options_locations: location = JobLocation() location.jobLocation=option.get_Text() location.save() # BASIC INFO ---JOB TITLE, JOB LINK, JOB DATE divs = soup.find_all('div',class_="mx-5 md:mx-0 flex flex-wrap col-span-1 mb-5 bg-white rounded-lg border border-gray-300 hover:border-gray-400 focus-within:ring-2 focus-within:ring-offset-2 focus-within:ring-gray-500") for job in divs: save=Jobs() job_title = job.find('div',class_="flex items-center").find('p',class_='text-lg font-medium break-words text-link-500').get_text().strip() job_link = job.find('div', class_="flex items-center").find('a', class_='relative mb-3 text-lg font-medium break-words focus:outline-none metrics-apply-now text-link-500 text-loading-animate')['href'] dates = job.find('div',class_="flex flex-row items-start items-center px-5 py-3 w-full border-t border-gray-300").find('p', class_='ml-auto text-sm font-normal text-gray-700 text-loading-animate').get_text().strip() job_image = job.find('img') if job_image: src = job_image.get('src') else: continue if dates is not None: save.scraped_date=dates # /* Job Functions, Details and Summary # /* ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ */ job_response = requests.get(job_link) job_soup = BeautifulSoup(job_response .content, 'html.parser') Job_function_name = job_soup.find('div',class_='flex flex-wrap justify-start pt-5 pb-2 px-4 w-full border-b border-gray-300 md:flex-nowrap md:px-5').find('div',class_='w-full text-gray-500').find('h2',class_='text-sm font-normal').find('a').get_text(strip=True) job_search = job_soup.find('div', class_='mt-3') Job_location_name = job_search.find('a', class_="text-sm font-normal px-3 
rounded bg-brand-secondary-50 mr-2 mb-3 inline-block").get_text(strip=True) industry_search = job_soup.find('div', class_='w-full text-gray-500') Job_industries_name = industry_search.find_all('div')[1].find('a', class_='text-sm font-normal px-3 rounded bg-brand-secondary-50 mr-2 mb-3 inline-block').get_text(strip=True) jobFunction, _ = JobFunctions.objects.get_or_create(jobFunction=Job_function_name) jobIndustries, _ = JobIndustries.objects.get_or_create(jobIndustries=Job_industries_name) jobLocation, _ = JobLocation.objects.get_or_create(jobLocation=Job_location_name) job_image = JobImages(jobImages=src) job_image.save() new_job = Jobs( job_title=job_title, scraped_date=dates, job_link =job_link, Job_Function = jobFunction, Job_Industries = jobIndustries, Job_Location = jobLocation, Job_Image = job_image ) new_job.save() # HERE WE SCRAP THE JOB DETAILS NESTED IN THE JOB LINK !!! jb_summary = job_soup.find('div', class_='py-5 px-4 border-b border-gray-300 md:p-5') if jb_summary.find('h3').get_text(): description=JobDetails() description.job=new_job description.details=jb_summary.find('h3').get_text() description.save() if jb_summary.find('p').get_text(): descriptio=JobDetails() description.job=new_job description.details=jb_summary.find('p').get_text() description.save() qualification = jb_summary.find('ul') if qualification: qualifications = qualification.find_all('li') for requirements in qualifications: description = JobDetails() description.job = new_job description.details=requirements.get_text() description.save() job_info = job_soup.find('div', class_='text-sm text-gray-500') for info in job_info: bold_tag =info.find('b') content=info.get_text() if bold_tag: job_detail = JobDetails(job=new_job, details=content,bold=True) else: job_detail = JobDetails(job=new_job, details=content,bold=False) job_detail.save() next_info = info.find_next_sibling() if next_info and next_info.name == 'ul': ul = info.find_next_sibling('ul') if ul: cont1 = '' for li in ul.find_all('li'): cont1 = li.text.strip() content = cont1 job_detail1 = JobDetails(job=new_job, details=content) job_detail1.save() from rest_framework.views import APIView from rest_framework.response import Response from rest_framework import authentication, permissions from django.contrib.auth.models import User class ListUsers(APIView): """ View to list all users in the system. * Requires token authentication. * Only admin users are able to access this view. """ authentication_classes = [authentication.TokenAuthentication] permission_classes = [permissions.IsAuthenticated] def get(self, request, format=None): """ Return a list of all users. 
""" usernames = [user.username for user in User.objects.all()] return Response(usernames) from rest_framework.authtoken.views import ObtainAuthToken from rest_framework.authtoken.models import Token from rest_framework.response import Response class CustomAuthToken(ObtainAuthToken): def post(self, request, *args, **kwargs): serializer = self.serializer_class(data=request.data, context={'request': request}) serializer.is_valid(raise_exception=True) user = serializer.validated_data['user'] token, created = Token.objects.get_or_create(user=user) return Response({ 'token': token.key, 'user_id': user.pk, 'email': user.email }) class JobDetailViewSet(viewsets.ModelViewSet): queryset=JobDetails.objects.all() serializer_class=JobDetailsSerializer class JobViewSet(viewsets.ModelViewSet): queryset = Jobs.objects.all() serializer_class = JobSerializer class JobFunctionViewset(viewsets.ModelViewSet): queryset = JobFunctions.objects.all() serializer_class = JobFunctionSerializer class JobIndustriesViewset(viewsets.ModelViewSet): queryset = JobIndustries.objects.all() serializer_class = JobIndustriesSerializer class JobLocationViewset(viewsets.ModelViewSet): queryset = JobLocation.objects.all() serializer_class = JobLocationSerializer class JobImageViewset(viewsets.ModelViewSet): queryset = JobImages.objects.all() serializer_class = JobImagesSerializer ``` ## Step 5: Creating the API Endpoints Using the Django REST framework, I created serializers and views to expose the job data through API endpoints. ``` from rest_framework import serializers from .models import Jobs, JobFunctions, JobLocation, JobIndustries, JobDetails, JobImages from datetime import datetime, timedelta from django.contrib.auth.models import User from rest_framework import serializers class UserSerializer(serializers.HyperlinkedModelSerializer): class Meta: model = User fields = ['url', 'username', 'email', 'FirstName','LastName'] class JobDetailsSerializer(serializers.ModelSerializer): class Meta: model=JobDetails fields='__all__' class JobFunctionSerializer(serializers.ModelSerializer): class Meta: model=JobFunctions fields='__all__' class JobIndustriesSerializer(serializers.ModelSerializer): class Meta: model=JobIndustries fields='__all__' class JobLocationSerializer(serializers.ModelSerializer): class Meta: model=JobLocation fields='__all__' class JobImagesSerializer(serializers.ModelSerializer): class Meta: model=JobImages fields='__all__' class JobSerializer(serializers.ModelSerializer): Job_Function = JobFunctionSerializer() Job_Industries = JobIndustriesSerializer() Job_Location = JobLocationSerializer() Job_Image = JobImagesSerializer() job_details=JobDetailsSerializer(many=True) class Meta: model = Jobs fields = ['id', 'job_title', 'scraped_date','job_link', 'Job_Function', 'Job_Industries', 'Job_Location','Job_Image', 'job_details'] ``` I then set up the URLs for the API in urls.py. ``` from django.urls import path, include from .views import JobViewSet, JobDetailViewSet, JobFunctionViewset, JobLocationViewset, JobIndustriesViewset from rest_framework import routers rt = routers.DefaultRouter() rt.register(r'BrighterMondayJobs', JobViewSet, basename='brighter') rt.register(r'job_details', JobDetailViewSet) rt.register(r'job_functions', JobFunctionViewset) rt.register(r'job_locations', JobLocationViewset) rt.register(r'job_industries', JobIndustriesViewset) urlpatterns = [ path('', include(rt.urls)), # path('api/accounts/' , include('accounts.urls')) ] ``` And included the app's URLs in the project's urls.py. 
``` """ URL configuration for myapp project. The `urlpatterns` list routes URLs to views. For more information please see: https://docs.djangoproject.com/en/4.2/topics/http/urls/ Examples: Function views 1. Add an import: from my_app import views 2. Add a URL to urlpatterns: path('', views.home, name='home') Class-based views 1. Add an import: from other_app.views import Home 2. Add a URL to urlpatterns: path('', Home.as_view(), name='home') Including another URLconf 1. Import the include() function: from django.urls import include, path 2. Add a URL to urlpatterns: path('blog/', include('blog.urls')) """ from django.contrib import admin from django.urls import path, include from scraper import urls as kim from scraper.views import ListUsers,CustomAuthToken # from rest_framework_simplejwt.views import ( # TokenObtainPairView, # TokenRefreshView, # ) urlpatterns = [ path('admin/', admin.site.urls), path('api/users/', ListUsers.as_view()), path('api/token/auth/', CustomAuthToken.as_view()), # path('api/token/', TokenObtainPairView.as_view(), name='token_obtain_pair'), # path('api/token/refresh/', TokenRefreshView.as_view(), name='token_refresh'), path('', include(kim)) ] ``` Here's my github repo of the same [Github Link](https://github.com/daniel-caleb/Brighter_Jobs) Am glad to be part of the HNG Internship opportunity which I would recommend being part of them through [HNG internship](https://hng.tech/internship) or [HNG Hire](https://hng.tech/hire) You can reach out to me via [email protected] ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hnukuuk6vd6rugnj6h8d.png)
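One refactor worth calling out, which is not part of the original code above: the scraping logic currently runs at import time inside `views.py`, so the site is scraped every time Django loads that module. A common way around this is a management command. Here is a rough, hypothetical sketch that assumes the BeautifulSoup logic has been moved into a `run_scraper()` function; the path `jobs/management/commands/scrape_jobs.py` and the module name are illustrative only, so adjust them to the actual app layout.

```python
# jobs/management/commands/scrape_jobs.py  (hypothetical path)
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Scrape BrighterMonday listings and store them in the database"

    def handle(self, *args, **options):
        # run_scraper() is assumed to wrap the BeautifulSoup code shown above
        from jobs.scraper import run_scraper
        run_scraper()
        self.stdout.write(self.style.SUCCESS("Scraping finished"))
```

It could then be run on a schedule (for example from cron) with `python manage.py scrape_jobs`.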
danielcaleb
1,910,615
How To Create a Linux virtual machine in the Azure portal. by moses emeka
Sign in to Azure Create virtual machine Connect to virtual machine Install web server View the web...
0
2024-07-03T19:38:38
https://dev.to/emeka_moses_c752f2bdde061/how-to-create-a-linux-virtual-machine-in-the-azure-portal-by-moses-emeka-lfh
azure, linux, virtualmachine, cloud
[Sign in to Azure Create virtual machine Connect to virtual machine Install web server View the web server in action Clean up resources ](url) ## Applies to: βœ”οΈ Linux VMs Azure virtual machines (VMs) can be created through the Azure portal. The Azure portal is a browser-based user interface to create Azure resources. This quickstart shows you how to use the Azure portal to deploy a Linux virtual machine (VM) running Ubuntu Server 22.04 LTS. To see your VM in action, you also SSH to the VM and install the NGINX web server. If you don't have an Azure subscription, create a free account before you begin. Sign in to Azure ## Sign in to the Azure portal. ## **Create virtual machine** Enter virtual machines in the search Under Services, select Virtual machines. In the Virtual machines page, select Create and then Virtual machine. The Create a virtual machine page opens. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tu7qjz2660jd5j2g4r11.PNG) In the Basics tab, under Project details, make sure the correct subscription is selected and then choose to Create new resource group. Enter myResourceGroup for the name ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rfwdg3kgjsuv1hcbbsc8.PNG) Under Instance details, enter myVM for the Virtual machine name, and choose Ubuntu Server 22.04 LTS - Gen2 for your Image. Leave the other defaults. The default size and pricing is only shown as an example. Size availability and pricing are dependent on your region and subscription ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0kqpc35vejiu1n6ka1iw.PNG) Under Administrator account, select SSH public key In Username enter azureuser. For SSH public key source, leave the default of Generate new key pair, and then enter myKey for the Key pair name. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ccx8nu58i27rsk8l6fs.png) Under Inbound port rules > Public inbound ports, choose Allow selected ports and then select SSH (22) and HTTP (80) from the drop-down ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5hpznk6dyfaf04yq8w5n.png) Leave the remaining defaults and then select the Review + create button at the bottom of the page. On the Create a virtual machine page, you can see the details about the VM you are about to create. When you are ready, select Create ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cgolyeijw5szrqu45v2x.PNG) When the Generate new key pair window opens, select Download private key and create resource. Your key file will be download as myKey.pem. Make sure you know where the .pem file was downloaded; you will need the path to it in the next step. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u4n7sykx85rdaapbzf9u.PNG) When the deployment is finished, select Go to resource. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/icjtip8kodtg9pbdrg6c.PNG) On the page for your new VM, select the public IP address and copy it to your clipboard. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gffrrdtysy43arq3ecao.PNG) ## Connect to virtual machine If you are on a Mac or Linux machine, open a Bash prompt and set read-only permission on the .pem file using chmod 400 ~/Downloads/myKey.pem. If you are on a Windows machine, open a PowerShell prompt At your prompt, open an SSH connection to your virtual machine. 
Replace the IP address with the one from your VM, and replace the path to the .pem with the path to where the key file was downloaded. ssh -i C:/users/EMMYFURTHER/Downloads/myKey.pem [email protected] ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x5jnabrd92oq752ord1m.PNG) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/svtghbn37uqjwf19ety1.PNG) ## Install web server To see your VM in action, install the NGINX web server. From your SSH session, update your package sources and then install the latest NGINX package ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i984ytc4sn8wng3cz38j.PNG) When done, type exit to leave the SSH session. ## View the web server in action Use a web browser of your choice to view the default NGINX welcome page. Type the public IP address of the VM as the web address. The public IP address can be found on the VM overview page or as part of the SSH connection string you used earlier ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iwfavnrvtd0wj6ad3e3g.PNG) ## Clean up resources ## Delete resources When no longer needed, you can delete the resource group, virtual machine, and all related resources. 1. On the Overview page for the VM, select the Resource group link 2. At the top of the page for the resource group, select Delete resource group. 3. A page will open warning you that you are about to delete resources. Type the name of the resource group and select Delete to finish deleting the resources and the resource group.
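If you prefer the command line over the portal, the same cleanup can be done with the Azure CLI. This is just a sketch; it assumes you kept the resource group name myResourceGroup from earlier and that you are already signed in with `az login`.

```bash
# Delete the resource group and everything in it (VM, disk, NIC, public IP).
# --yes skips the confirmation prompt, --no-wait returns without waiting.
az group delete --name myResourceGroup --yes --no-wait
```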
emeka_moses_c752f2bdde061
1,910,630
Doesn't move across pages
https://codepen.io/codepcod/pen/OJeLLWj if you navigate through the menu, it says "no object", the...
0
2024-07-03T19:27:26
https://dev.to/eva_smirnova_eb6512fdce19/doesnt-move-across-pages-14fi
https://codepen.io/codepcod/pen/OJeLLWj. If you navigate through the menu, it says "no object", and the search bar says "about.html".
eva_smirnova_eb6512fdce19
1,910,629
Avoid Candidates Who Hide Details
Understanding how a candidate plans to work is crucial when hiring remote workers. Have you ever...
0
2024-07-03T19:27:04
https://jetthoughts.com/blog/avoid-candidates-who-hide-details-startup-leadership/
startup, leadership, tutorial
Understanding how a candidate plans to work is crucial when hiring remote workers. Have you ever encountered a candidate who hesitates to share their work methods, perhaps citing: > I'm not a free consultant While it may sound witty, it's a red flag. Here's why. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b9b4ritbvzwpywooylwn.png) Fear of Intellectual Theft? Not a Valid Excuse ---------------------------------------------- 1. **Standard Methods**: Most strategies are out there for all to see. They're not secrets. If candidates fear sharing, they may not know the methods at all. 2. **Prove Your Worth**: Sharing ideas shows they know their stuff. Why trust them to do it if they can't explain what they will do? Personal Traits? Not a Good Reason ---------------------------------- 1. **Team Player Needed**: How will they work with your team if they won't share now? You need someone who can explain their work clearly. 2. **Openness Matters**: A good worker should be open and transparent. Hidden methods mean hidden problems later. Creating Trust in Interviews ---------------------------- 1. **Open Discussion**: Good candidates talk freely about their work. It shows they trust you and understand your needs. 2. **Show Their Skills**: When they share, they show their skills. They prove they can turn plans into reality. Conclusion: Look for Openness ----------------------------- Don't ignore the red flags. A great candidate shares their methods. They show they are ready to work with you. They prove they can get the job done. Keep this in mind when you hire. By prioritizing open and transparent people, you can take control of your hiring process and ensure the best fit for your team. > Hiring the right person is vital. Ensure they show you how they will work and prove they know. This is not just about their methods but their ability to turn their plans into reality, giving you confidence in your hiring decisions.
jetthoughts_61
1,910,476
How to Be a Pragmatic Programmer?
One of the books that should be required reading for developers, The Pragmatic Programmer offers in...
0
2024-07-03T19:26:58
https://dev.to/loremimpsu/como-ser-um-programador-pragmatico-gek
softwaredevelopment, softwareengineering, development, developer
One of the books that should be required reading for developers, The Pragmatic Programmer offers, in its first chapter, a set of tips on how to be a pragmatic programmer.
## 1. Care about your craft
This is the bare minimum for being a programmer: you need to care at least somewhat about the work of development in order to do it well.
## 2. Think about your work
Don't just do things on autopilot. Question, speak up, and reflect on whatever you are building. Don't simply accept whatever comes your way and shrug it off...
## 3. Provide options, don't make lame excuses
Don't just say something can't be done because of a technology, a deadline, or a skill you lack. Go after a viable option, even if it is a workaround; not every innovation was born as an innovation, sometimes they are just hacks that worked out really well. Always look for a workable way out of the problem before refusing to solve it. If necessary, ask the community for help, train yourself, and find the best way to solve it.
## 4. Don't tolerate "small mistakes"
The book uses the example of a broken window in a building that isn't treated as a big deal until the residents start dumping trash or breaking other windows. The parallel is leaving loose ends in the code, such as dead code, poorly executed workarounds, and fragile logic. Always treat the small symptom so it doesn't turn into a gigantic one.
## 5. Be a catalyst for change
If the project is a mess, why not try to improve it? Be the dev who offers tips on how a given feature could be improved, who is proactive about fixing a "broken window", who brings a culture of change to the team. Not every project is born well structured and grows well structured. Some projects are maintained only for their return on investment, while others try to adapt to new technologies or needs. Before condemning code with years of development behind it, try to improve what exists and keep that mindset present in your own development.
## 6. Remember the big picture
As a counterpoint to change, it is not a good idea to try to force change. Modify things little by little so that in the end you do more good than harm. A good programmer always thinks about the future and about how the project may grow, even if it wasn't designed for that, and small changes can create chaos in those cases. The book uses an interesting phrase to illustrate what can happen: _in chasing the best, we often harm what is good._ It describes well those cases where devs force a change on something that is perfectly fine for the moment because they want to reach a level of perfection that will never exist, and end up compromising weeks of product development.
## 7. Make quality a requirements issue
How do you know when the development is good enough? One metric is quality. But how do you measure quality? That is a strategic point of development. There are kinds of quality: the quality of using the product, usually noticed by the user, and the quality of the development itself, noticed by the developer. You and your team have to reach a consensus on how you will handle this. Sometimes the development isn't of great quality but the product is amazing for the user; declaring the product bad because of its development quality doesn't fly. It will make for a good discussion with your colleagues.
## 8. Invest regularly in your "knowledge portfolio"
The book uses the analogy of a financial investment. A good investor diversifies their investments, maintains the portfolio by managing risk, invests regularly, and restructures their funds when needed. A developer's investment is in knowledge. To invest in knowledge, there are a few tips:
### Learn at least one new language every year
Even if you never use it in your current project or job, learning a new language every year is good for staying in touch with current development and having new job options in case of a layoff. Besides, programming with different paradigms and languages sharpens your reasoning as a developer and gives you unconventional ways of getting things done (workarounds).
### Read a technical book each quarter
Books in the field are long-term investments and bring lessons from developers with more experience than you.
### Read non-technical books too
Good reading comprehension helps in a developer's day-to-day work, whether it is interpreting a task or reading documentation. Reading is always an important skill in any field, and it is also a lot of fun.
### Take classes
Another way to take in information: short courses with good teaching and engaging instructors help you digest content much better.
### Participate in groups
Exchanging experience with other developers, more senior or less senior, is always good for your career. Talk, give tips, pass on what you have learned, take part in the **dev community**, be kind and don't stir up drama, and learn to trade experience in a fun way.
### Experiment with different environments
The tip is about technology, but it can also be about geography (go travel). Varying your development environment gives you the experience of working with the same technology under different conditions. Don't mind suffering a little: keep changing the operating system, the IDE, the version-control tooling of the stack. Vary.
### Stay current
Know what is going on inside and outside your bubble. Always try to stay up to date on what may affect technologies or the market, on what a given language will bring in its next release, watch that hours-long event a big tech company runs to announce its news (which the public usually only hears about when something goes wrong), and be active in seeking out information. These are simple learning tips that help a lot with this "knowledge investment".
## 9. Critically analyze what you read and hear
Within the knowledge investment from the previous point, there is a risk that, while seeking knowledge, you end up swallowing a lot of junk information that adds nothing. Think a little; don't go applying everything you read or heard from a book (hello, Clean Code!). Have the critical sense to keep what is good for your current stage of development and discard what is nonsense (including this very text).
## 10. It's both what you say and the way you say it
Learn to communicate clearly and without the need for interpreters. Learn to pass information along and to listen to it. Don't stop giving your opinion, but learn how to give those opinions without offending the person you are debating with. Developing software is still work done by people and for people. Don't be a jerk.
## Challenges
Besides the tips, there are a few challenges at the end of each section of the book. I committed to doing 3 of them, to try to improve or at least have the experience. Here are the challenges I chose:
- Help strengthen your team by examining the code in your computing environment. Pick two or three "broken windows" and discuss with your colleagues what the problems are and what could be done to fix them.
- Start learning a new language this week [...]
---
Without a doubt, The Pragmatic Programmer is the best career-advice book for all developers. It is a shame that in some areas it is quite dated. Nothing that a bit of interpretation for today's reality can't fix.
> [8/100] Days of Code
loremimpsu
1,910,627
Twisted Handle Paper Bags: A Design Perspective
Twisted handle paper bags are more than just convenient carriers. From a design perspective, they...
0
2024-07-03T19:23:35
https://dev.to/zaman_ali_1/twisted-handle-paper-bags-a-design-perspective-3ocm
bags
Twisted handle paper bags are more than just convenient carriers. From a design perspective, they offer a captivating canvas for creativity, branding, and artistic expression. This article delves into the world of [twisted handle paper bags](https://elixirpackaging.com/twisted-handle-paper-bags/) through a designer's lens, exploring the various design elements that transform them from practical items to mini masterpieces. ## The Art of the Bag: Shape, Size, and Material The very foundation of a twisted handle paper bag's design lies in its shape, size, and material selection. Each element contributes to the overall aesthetic and functionality of the bag. Shape Symphony: Beyond the standard rectangular shape, explore options like Eurototes with flat bottoms and reinforced handles for a more upscale look. Consider gussetted bags that expand to accommodate bulky items while maintaining a compact profile when empty. Size Spectrum: Choose from a diverse range of sizes to fit the purpose. Small bags for delicate jewelry, medium sizes for books, and large bags for groceries – cater to the intended use while maintaining aesthetic balance. Material Marvels: Standard Kraft paper offers a natural charm. Opt for colored paper to complement your brand palette. Explore recycled paper content for an eco-friendly statement. Consider specialty papers like textured or metallic finishes for a touch of luxury. The Power of Print: From Logos to Storytelling Printing plays a pivotal role in transforming a twisted handle paper bag into a design statement. Here's where your creativity can truly shine: Logo Love: Place your logo prominently on the bag, ensuring brand recognition with every use. Explore different logo placements and sizes to achieve visual impact. Color Splash: Choose colors that align with your brand identity or evoke emotions associated with your products. Utilize a vibrant color palette to attract attention or opt for a minimalist, single-color approach for a sophisticated look. Typography Treats: Select fonts that complement your brand message and target audience. Consider playful fonts for children's stores or elegant scripts for luxury brands. Storytelling Through Design: Go beyond logos and explore creative illustrations, patterns, or photographs that tell your brand story and connect with customers on an emotional level. Beyond the Basics: Advanced Design Techniques For those seeking to push the boundaries, several advanced design techniques can elevate your twisted handle paper bags to new heights: Handle Flair: Move beyond traditional paper handles. Explore options like cotton rope handles for a natural look or colored handles that complement your design. Consider metallic handle accents for a touch of luxury. Window Wonders: Incorporate window panes into your bags, allowing customers to peek at the contents and increasing the appeal of your products. Specialty Coatings: Opt for specialty coatings like matte or glossy finishes to add a touch of sophistication or enhance the visual impact of your design. Foil Frenzy: For a truly eye-catching effect, consider using foil stamping to make your logo or design elements stand out. Function Meets Form: Design Considerations for Usability While aesthetics are important, a successful design must prioritize functionality. Here's how to ensure your twisted handle paper bag design is user-friendly: Readability Rules: Ensure any text on the bag is easy to read. Choose clear fonts and contrasting colors for optimal legibility. 
Information Architecture: Consider including relevant information on the bag, such as your company name, website, or social media handles. Place this information strategically without compromising the overall design. Durability Matters: Choose materials and printing techniques that can withstand everyday use. Consider factors like weight capacity and potential environmental conditions when selecting materials. The Final Touches: Sustainability and Responsible Design In today's eco-conscious world, responsible design plays a crucial role. Here are some ways to ensure your twisted handle paper bag design aligns with sustainability principles: Sustainable Materials: Opt for recycled paper content and explore alternative, plant-based materials whenever possible. Soy-Based Inks: Choose eco-friendly printing inks made from renewable resources like soy. Minimalist Approach: Less is often more. A clean, well-designed bag with minimal printing can be just as impactful as a heavily printed design. Reusable Focus: Encourage reuse by creating sturdy, well-constructed bags that customers can use for future shopping trips. Conclusion: A Canvas for Creativity and Sustainability Twisted handle paper bags offer a unique design canvas to express your brand identity, tell your story, and connect with customers. By employing creativity, exploring design options, and prioritizing both aesthetics and functionality, you can transform these eco-friendly heroes into miniature works of art, promoting your brand while contributing to a more sustainable future. So, unleash your design spirit, embrace responsible
zaman_ali_1
1,910,623
How to Automate User and Group Creation and Management Using a Simple Bash Script
One of the telling signs someone's a DevOps engineer is their unrivaled urge to automate process. It...
0
2024-07-03T19:23:17
https://dev.to/gideonisbuilding/how-to-automate-user-and-group-creation-and-management-using-a-simple-bash-script-1321
linux, devops, learning
One of the telling signs someone's a DevOps engineer is their unrivaled urge to automate process. It isn't just for automation sake, but to: 1. Enhance productivity 2. Reduce human error One such task is onboarding new developers to the team. This includes creating user accounts, assigning them to appropriate groups, and setting secure passwords. This article will walk you how you can streamline this process using a bash script. You don't have to be a pro at this, it shows you why each step is important. What does this script do? The script automates the following tasks: * Prompts the user for the path to an employee configuration file. * Validates the provided file path. * Creates necessary directories and sets appropriate permissions. * Checks for and installs the `makepasswd` tool if not already present. * Generates secure passwords for new users. * Creates user accounts and assigns them to specified groups. * Logs all actions for auditing purposes. Let's break down each section of the script. ## Color-Coded Output Functions ``` bash #!/bin/bash #-------------------------- # Function to echo in green #-------------------------- green_echo() { echo -e "\e[32m$1\e[0m" } #------------------------ # Function to echo in red #------------------------ red_echo() { echo -e "\e[31m$1\e[0m" } ``` These functions, `green_echo` and `red_echo`, are used to print messages in green and red. The idea is to help you differentiate between successful and error messages, making your script's output more readable. Typically, output messages from running scripts are in white (against the set background of your terminal) making it difficult to see errors among the sea of output messages. Red and green-codded outputs make it easier to scan your script output. ## Checking if a File Argument Was Provided ```bash #--------------------------------------------------------- # Check if a file path argument is provided and validate it #--------------------------------------------------------- if [ -z "$1" ]; then red_echo "Error: No file path provided. Please provide the employee config file path as the first argument." exit 1 fi EMPLOYEE_CONFIG_FILE="$1" if [ ! -f "$EMPLOYEE_CONFIG_FILE" ]; then red_echo "Error: The file '$EMPLOYEE_CONFIG_FILE' does not exist or is not a regular file." exit 1 fi green_echo "File path is valid." ``` This portion of the script confirms the argument the user attaches to the script is a valid file and exists. If the file path is invalid, it will prompt the user to try again, ensuring an argument is provided and it exists before proceeding. ## Defining File Paths ```bash #----------------------------------------- # Variables for the password and log files #----------------------------------------- PASSWORD_FILE="/var/secure/user_passwords.txt" LOG_FILE="/var/log/user_management.log" ``` These variables define the paths for storing generated passwords and logs. It is standard DevOps practice to store passwords in a file. And for even better practice, ensures these files are secure and only readable by appropriate personnel. Another standard practice is logging actions. It helps you in auditing and troubleshooting should you be stuck. Defining the variables for the paths to these files early on makes it easier to maintain the script. 
## Creating and Securing Directories ```bash #---------------------------------------------------------- # Create necessary directories with appropriate permissions #---------------------------------------------------------- sudo mkdir -p /var/secure sudo mkdir -p /var/log sudo chmod 600 /var/secure ``` This script block creates the directories to the files for storing passwords and logs if they do not already exist. It also sets strict permissions (`chmod 600`) on the `/var/secure` directory so that only the user can read and write to it, enhancing security. ## Ensuring `makepasswd` is Installed ```bash #---------------------------------------------- # Checking and ensuring makepasswd is installed #---------------------------------------------- if ! command -v makepasswd &>/dev/null; then sudo apt-get update && sudo apt-get install -y makepasswd fi ``` `makepasswd` is a tool used to generate random passwords. It doesn't come preinstalled in most Linux/Unix systems. So, you'd have to check if it is installed before moving on to other parts. This block checks if `makepasswd` is installed and installs it if necessary. Ensuring this dependency is met is crucial for the script to function correctly. ## Password Generation Function ```bash #-------------------------------------------- # Generate a random password of 16 characters #-------------------------------------------- generate_password() { makepasswd --chars 16 } ``` This function uses `makepasswd` to generate a random password of 16 characters. Strong passwords are essential for security, and automating their generation helps maintain consistency and complexity. ## Clearing Previous Logs and Passwords ```bash #-------------------------------------- # Clear previous log and password files #-------------------------------------- sudo truncate -s 0 "$LOG_FILE" sudo truncate -s 0 "$PASSWORD_FILE" ``` Before starting the onboarding process, the script clears previous log and password files. This ensures that the new run starts with clean files, preventing confusion with old data. Note that if you use `echo "" | sudo tee "$LOGFILE" > /dev/null` or `echo "" | sudo tee "$PASSWORD_FILE" > /dev/null` instead, what you get instead will be a file with an empty line. Any lines appended to the file will begin from the next line, leaving an empty first line. Using truncate sets the file to whatever size you specify. And in this case, it's zero bytes which effectively clears the file" ## Reading and Processing the Configuration File ```bash while IFS=';' read -r username groups; do #---------------------------------------- # Remove leading and trailing whitespaces #---------------------------------------- username=$(echo "$username" | xargs) groups=$(echo "$groups" | xargs) #----------------- # Skip empty lines #----------------- [ -z "$username" ] && continue #--------------------------------- # Split the groups field by commas #--------------------------------- IFS=',' read -ra group_array <<<"$groups" ``` The script reads the configuration file line by line, splitting each line into a username and groups. It goes further to remove any leading or trailing whitespace to ensure clean data processing. There's the added contingency to skip empty lines to prevent errors. ## Creating Users and Setting Passwords ```bash #--------------------------------- # Check if the user already exists #--------------------------------- if id "$username" &>/dev/null; then red_echo "The user $username already exists." 
| sudo tee -a "$LOG_FILE" else sudo useradd -m -s /bin/bash "$username" && green_echo "The user $username has been created." | sudo tee -a "$LOG_FILE" #--------------------------- # Generate a random password #--------------------------- password=$(generate_password) #------------------------ # Set the user's password #------------------------ echo "$username:$password" | sudo chpasswd echo "$username:$password" | sudo tee -a "$PASSWORD_FILE" fi ``` This block checks if the user already exists. If the user does not exist, it creates the user with a home directory and Bash shell. A random password is generated and set for the user. The username and password are logged for record-keeping. ## Creating Primary Groups ```bash #-------------------------------------------------------- # Create a primary group for the user if it doesn't exist #-------------------------------------------------------- if ! getent group "$username" >/dev/null; then sudo groupadd "$username" && green_echo "Primary group $username created." | sudo tee -a "$LOG_FILE" fi ``` For each new user, the script checks if a primary group with the same name exists. If not, it creates the group. This ensures that each user has a corresponding group, which is a common practice in Unix-like systems. ## Assigning Users to Additional Groups ```bash for group in "${group_array[@]}"; do if ! getent group "$group" >/dev/null; then sudo groupadd "$group" && green_echo "Group $group created." | sudo tee -a "$LOG_FILE" fi sudo usermod -aG "$group" "$username" && green_echo "User $username added to group $group." | sudo tee -a "$LOG_FILE" done ``` The script processes each group specified for the user. It checks if the group exists and creates it if necessary. It then adds the user to each group, ensuring they have the required permissions and access. ## Setting Home Directory Permissions ```bash #------------------------------- # Set home directory permissions #------------------------------- sudo chown -R "$username":"$username" "/home/$username" sudo chmod 700 "/home/$username" done <"$EMPLOYEE_CONFIG_FILE" green_echo "User onboarding script completed. See $LOG_FILE for details." ``` Finally, the script sets the ownership and permissions for the user's home directory. This ensures that the user has the necessary access to their files while also securing the directory from unauthorized access. To run the script, make it executable. ```bash chmod +x create_users.sh ``` Then execute it: ```bash ./create_users.sh new_developers.txt ``` So the script runs as intended, ensure that the file the script reads is written and formatted this way (username to the left of the semi-colon and user groups to the right): ``` light; sudo,dev,www-data idimma; sudo mayowa; dev,www-data ``` The assumption behind this example is that a user can have multiple groups, each group delimited by a comma ",". Here's the [link to the GitHub repo](https://github.com/GideonIsBuilding/hng-task_01) if you want to take a look at it closely. # Final Words... When you automate the user onboarding process with a Bash script ensures consistency, accuracy, and security. By following the steps outlined in this article, you can streamline the creation of user accounts, assignment of groups, and setting of secure passwords, all while maintaining detailed logs for auditing purposes. To improve the security of the script, you can: 1. Hash the passwords after generation 2. Configure the script to send each employee their login details to their emails Tweak and improve as desired. 
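As a starting point for the first suggestion above, here is a minimal sketch of how the password handling inside the loop could be changed so that only a SHA-512 crypt hash lands in `/var/secure/user_passwords.txt`. This is my illustration, not part of the original script, and it assumes `openssl` is available:

```bash
#----------------------------------------------------------------
# Hypothetical tweak: keep setting the real password with chpasswd,
# but store only a SHA-512 crypt hash in the password file.
#----------------------------------------------------------------
password=$(generate_password)
hashed=$(openssl passwd -6 "$password")
echo "$username:$password" | sudo chpasswd
echo "$username:$hashed" | sudo tee -a "$PASSWORD_FILE" > /dev/null
```

The clear-text password would then have to reach the new user through some other secure channel, which ties into the second suggestion about emailing login details.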
If you are looking for projects like this to help you better understand DevOps concepts, enroll in the [HNG Internship](https://hng.tech/internship). If you are already a professional looking for a community of peers to collaborate with and want recruiters to review your resume, opt for the [HNG Premium](https://hng.tech/premium) package.
gideonisbuilding
1,910,625
User Management in Linux using a bash script
Managing users on Linux systems can be daunting, especially in environments with frequent changes. As...
0
2024-07-03T19:15:25
https://dev.to/ambroseotundo/user-management-in-linux-using-a-bash-script-31l9
Managing users on Linux systems can be daunting, especially in environments with frequent changes. As part of my online internship at [https://hng.tech/internship](url), I've been assigned a task to develop a Bash script to automate this process. Let's dive into how this script works: #### 1. Initial Setup ```bash # Check if script is run with sudo if [ "$(id -u)" != "0" ]; then echo "This script must be run with sudo. Exiting..." exit 1 fi # Redirect stdout and stderr to log file exec > >(tee -a "$LOG_FILE") 2>&1 echo "Executing script... (note that this line will be logged twice)" | tee -a $LOG_FILE ``` #### 2. Input Validation ```bash # Check if an argument was provided if [ $# -eq 0 ]; then echo "No file path provided." echo "Usage: $0 <user-data-file-path>" exit 1 fi # Check if the user's data file exists if [ ! -e "$USERS_FILE" ]; then echo "The provided user's data file does not exist: $USERS_FILE" exit 1 fi ``` #### 3. Dependency Management ```bash # Function to check if a package is installed is_package_installed() { dpkg -s "$1" >/dev/null 2>&1 } # Check if openssl is installed if ! is_package_installed openssl; then echo "openssl is not installed. Installing..." sudo apt-get update sudo apt-get install -y openssl fi # Check if pwgen is installed if ! is_package_installed pwgen; then echo "pwgen is not installed. Installing..." sudo apt-get update sudo apt-get install -y pwgen fi ``` #### 4. File and Directory Management ```bash # Create the directory where the user's password file will be stored sudo mkdir -p "$PASSWORD_FILE_DIRECTORY" # load the content of the users.txt file into an array: lines mapfile -t lines < "$USERS_FILE" ``` #### 5. User Creation and Management ```bash # loop over each line in the array for line in "${lines[@]}"; do # Remove leading and trailing whitespaces line=$(echo "$line" | xargs) # Split line by ';' and store the second part IFS=';' read -r user groups <<< "$line" # Remove leading and trailing whitespaces from the second part user=$(echo "$user" | xargs) groups=$(echo "$groups" | xargs) # Check if user exists if id "$user" &>/dev/null; then echo "User $user already exists. Skipping creation." continue fi ``` #### 6. Password Handling ```bash # Generate a 6-character password using pwgen password=$(pwgen -sBv1 6 1) # Encrypt the password before storing it encrypted_password=$(encrypt_password "$password" "$PASSWORD_ENCRYPTION_KEY") # Store the encrypted password in the file echo "$user:$encrypted_password" >> "$PASSWORD_FILE" ``` #### 7. User and Group Operations ```bash # Create the user with the generated password sudo useradd -m -p $(openssl passwd -6 "$password") "$user" # Set Bash as the default shell set_bash_default_shell "$user" # loop over each group in the groups array for group in "${groupsArray[@]}"; do group=$(echo "$group" | xargs) # Check if group exists, if not, create it if ! grep -q "^$group:" /etc/group; then sudo groupadd "$group" echo "Created group $group" fi # Add user to the group sudo usermod -aG "$group" "$user" echo "Added $user to $group" done ``` #### 8. Conclusion and Cleanup ```bash # remove the created password from the current shell session unset password ``` After completing this task and the follow-up tasks to stage 10 in the internship, I believe I will be ready for the job market in the DevOps field. If you are a hiring manager reading this article, you can get exceptional talent here [https://hng.tech/hire] (URL). Thank you for reading my article. Author: Ambrose Onsare
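One piece the snippets above reference but never show is the `encrypt_password` helper used before the password is written to the file. The real implementation is not included in this post, but a minimal sketch with openssl, assuming symmetric AES-256-CBC encryption keyed by `$PASSWORD_ENCRYPTION_KEY`, might look like this:

```bash
# Hypothetical helper -- the actual implementation is not shown in the post.
# Encrypts a password with a symmetric key and prints it base64-encoded.
encrypt_password() {
    local password="$1"
    local key="$2"
    echo -n "$password" | openssl enc -aes-256-cbc -a -salt -pbkdf2 -pass pass:"$key"
}
```

Decrypting later would use the same openssl command with `-d` added.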
ambroseotundo
1,910,624
User account creation using BASH
As a SysOps engineer, managing user accounts and groups efficiently is crucial for maintaining system...
0
2024-07-03T19:12:22
https://dev.to/uduakobongudombat/user-account-creation-using-bash-4en
bash, devops, cloud, linux
As a SysOps engineer, managing user accounts and groups efficiently is crucial for maintaining system security and access control. In this article, we’ll explore how to create a Bash script called create_users.sh that reads input from a text file, creates users and groups, sets up home directories, generates random passwords, logs actions, and securely stores passwords. Prerequisites Before we dive into the script, make sure you have the following: - A Linux system (e.g., Ubuntu, CentOS) with Bash installed. - Basic knowledge of Bash scripting. - The create_users.sh Script ```bash #!/bin/bash # automating user account creation # Input file (usernames and groups) input_file="$1" # Log file log_file="/var/log/user_management.log" # Secure password storage file password_file="/var/secure/user_passwords.txt" # create secure directory sudo mkdir -p /var/secure # Function to generate a random password generate_password() { # using 'openssl rand -base64 12’ to generate a 12-character password openssl rand -base64 12 } # Read input file line by line while IFS=';' read -r username groups; do # Create groups if they don't exist for group in $(echo "$groups" | tr ',' ' '); do groupadd "$group" 2>/dev/null || echo "Group $group already exists" done # Create user useradd -m "$username" -G "$groups" 2>/dev/null || echo "User $username already exists" # Set password password=$(generate_password) echo "$username:$password" | chpasswd # Log actions echo "$(date '+%Y-%m-%d %H:%M:%S') - Created user $username with groups: $groups" >> "$log_file" # Store password securely echo "$username:$password" >> "$password_file" done < "$input_file" ``` ### Explanation: - **Input File** ($input_file): The script expects an input file containing lines with the format username; groups. Each user can belong to multiple groups (comma-separated). - **Directory Creation**: This is needed to make the location defined above available. If this is not available the code will still run but the passwords will not be logged as required. This step has to be done with elevated privileges, hence sudo. - **Group Creation:** The script reads the input file and creates groups (if they don’t exist) using groupadd. Existing groups are skipped, and a message is logged. - **User Creation:** Users are created using useradd with the specified groups. Existing users are skipped, and a message is logged. - **Password Generation:** The generate_password function generates random passwords (customize as needed). Passwords are set using chpasswd. - **Logging:** Actions (user creation, group creation) are logged with timestamps in $log_file. - **Secure Password Storage:** Usernames and passwords are stored securely in $password_file. **- Usage** - Make the script executable: chmod +x create_users.sh - Run the script with the input file: ./create_users.sh input.txt This was a stage 1 DevOps Task at [HNG](https://hng.tech/internship). You can check for available roles at HNG [here](https://hng.tech/hire) This entire code is available at https://github.com/Uduakobong-Udombat/user_account_automation_with_bash
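For reference, the input file this script reads is a plain text file with one `username;groups` entry per line; the names below are only an example. Note that the script as written does not trim whitespace, so it is safest not to put a space after the semicolon:

```
light;sudo,dev,www-data
idimma;sudo
mayowa;dev,www-data
```

Each user can belong to several groups, separated by commas, exactly as the while loop above expects.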
uduakobongudombat
1,909,596
Front End Technologies
Front end development is all about creating the part of a website or a software application that...
0
2024-07-03T19:08:07
https://dev.to/n0tsam/front-end-technologies-18mi
Front end development is all about creating the part of a website or a software application that users see and interact with: in simple terms, the visual side of things, for example animations, buttons and more. Today I shall take you through essential technologies that help achieve front end development.

First is HyperText Markup Language (HTML). HTML provides the structure and content of web pages and is considered the skeleton of a website. HTML by itself, in my opinion, is ugly, and that is where we get to the second technology, known as Cascading Style Sheets (CSS). Do you remember how I said HTML is ugly? CSS is meant to change this by styling HTML elements, adding colors, fonts, cool animations and layouts. Good luck centering that div.

The third technology is JavaScript. JavaScript is a versatile language used for adding interactivity to web pages. You will definitely pull your hair out once in a while trying to understand JavaScript under the hood.

The fourth technology is responsive design. It enables us to create websites that adapt to different screen sizes for a smooth viewing experience. Finally, we have front end frameworks and libraries that provide reusable components.

If you are wondering how to learn to use these technologies like a pro, check out HNG: [HNG INTERNSHIP PROGRAMME](https://hng.tech/internship) [HNG CAREERS](https://hng.tech/hire)
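To make the roles described above concrete, here is a tiny self-contained page; every class name and message in it is made up purely for illustration. HTML provides the structure, CSS the styling, and JavaScript the interactivity.

```html
<!DOCTYPE html>
<html>
  <head>
    <style>
      /* CSS: style (and yes, center) the button */
      body { display: flex; justify-content: center; margin-top: 50px; }
      .greet-btn { padding: 10px 20px; background: teal; color: white; border: none; }
    </style>
  </head>
  <body>
    <!-- HTML: the structure users see -->
    <button class="greet-btn" id="greet">Click me</button>

    <script>
      // JavaScript: the interactivity
      document.getElementById('greet').addEventListener('click', function () {
        alert('Hello from the front end!');
      });
    </script>
  </body>
</html>
```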
n0tsam
1,910,621
facebook app review help
I have created a SaaS-based project by integrating Facebook, but the Facebook API requires advanced...
0
2024-07-03T19:07:47
https://dev.to/yemani_b81be4f85f866c8e77/facebook-app-review-help-18bg
I have created a SaaS-based project by integrating Facebook, but the Facebook API requires advanced access permissions. Can anyone help me with the app review of my Facebook app?
yemani_b81be4f85f866c8e77
1,908,971
Axios vs Fetch
Introduction In modern web development, making HTTP requests is a fundamental task....
0
2024-07-03T19:07:28
https://dev.to/wafa_bergaoui/axios-vs-fetch-543c
http, javascript, development
## **Introduction** In modern web development, making HTTP requests is a fundamental task. Whether you're fetching data from a server, submitting forms, or interacting with APIs, you need reliable tools to handle these operations. While JavaScript provides a built-in fetch API for making HTTP requests, many developers opt for third-party libraries like Axios for added functionality and convenience. ## **Why Do We Need HTTP Request Tools?** Handling HTTP requests and responses can be complex, especially when considering error handling, response parsing, and request configuration. Tools like Axios and Fetch simplify these tasks by providing abstractions and utilities that streamline the process. They help address common problems such as: **Boilerplate Code:** Simplifying repetitive tasks like setting headers and handling JSON responses. **Error Handling:** Providing more consistent and manageable error handling mechanisms. **Interceptors:** Allowing pre-processing of requests or responses, such as adding authentication tokens. ## **Fetch API** The Fetch API is a modern, built-in JavaScript method for making HTTP requests. It is promise-based, providing a more straightforward way to work with asynchronous operations compared to older methods like XMLHttpRequest. **Example** ```javascript // Making a GET request using Fetch fetch('https://api.example.com/data') .then(response => { if (!response.ok) { throw new Error('Network response was not ok'); } return response.json(); }) .then(data => console.log(data)) .catch(error => console.error('There was a problem with the fetch operation:', error)); ``` ## **Axios** Axios is a popular third-party library for making HTTP requests. It is promise-based like Fetch but includes many additional features that make it more convenient and powerful. **Example** ```javascript // Making a GET request using Axios axios.get('https://api.example.com/data') .then(response => console.log(response.data)) .catch(error => console.error('There was a problem with the axios request:', error)); ``` ## **Key Differences** **1. Default Handling of JSON** **- Fetch:** Requires manual conversion of response data to JSON. ```javascript fetch('https://api.example.com/data') .then(response => response.json()) // Manual conversion .then(data => console.log(data)); ``` **- Axios:** Automatically parses JSON responses. ```javascript axios.get('https://api.example.com/data') .then(response => console.log(response.data)); // Automatic conversion ``` **2. Error Handling** **- Fetch:** Only rejects a promise for network errors, not for HTTP errors (e.g., 404 or 500 status codes). ```javascript fetch('https://api.example.com/data') .then(response => { if (!response.ok) { throw new Error('Network response was not ok'); } return response.json(); }) .catch(error => console.error('Fetch error:', error)); ``` **- Axios:** Rejects a promise for both network errors and HTTP errors. ```javascript axios.get('https://api.example.com/data') .catch(error => console.error('Axios error:', error)); ``` **3. Request Configuration** **- Fetch:** Requires manual configuration of options like headers and method ```javascript fetch('https://api.example.com/data', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ key: 'value' }) }); ``` **- Axios:** Provides a more concise and readable syntax for configuration. 
```javascript axios.post('https://api.example.com/data', { key: 'value' }, { headers: { 'Content-Type': 'application/json' } }); ``` ## **Interceptors** Interceptors are a powerful feature in Axios that allow you to intercept requests or responses before they are handled by then or catch. They are useful for tasks like adding authentication tokens to every request or handling errors globally. **Example of Axios Interceptors** ```javascript // Adding a request interceptor axios.interceptors.request.use(config => { // Add an authorization header to every request const token = 'your_token'; if (token) { config.headers.Authorization = `Bearer ${token}`; } return config; }, error => { // Handle request error return Promise.reject(error); }); // Adding a response interceptor axios.interceptors.response.use(response => { // Handle successful response return response; }, error => { // Handle response error if (error.response.status === 401) { // Handle unauthorized error console.error('Unauthorized'); } return Promise.reject(error); }); ``` ## **Conclusion** Both Fetch and Axios are capable tools for making HTTP requests, but they offer different levels of convenience and functionality. Fetch is a native, modern option that works well for simple use cases, while Axios provides a richer feature set, including automatic JSON handling, better error management, and interceptors for request and response manipulation. Choosing between them depends on the specific needs of your project and your preference for simplicity versus functionality. By understanding the strengths and differences of each tool, you can make more informed decisions and write more efficient and maintainable code.
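As a final note, one thing Fetch lacks out of the box is interceptors, but you can approximate them with a small wrapper around `fetch`. The sketch below is illustrative only; the base URL, endpoint, and token storage are assumptions rather than part of either library:

```javascript
// A minimal fetch wrapper that mimics Axios-style request/response handling.
// BASE_URL and the token lookup are placeholders for this sketch.
const BASE_URL = 'https://api.example.com';

async function apiFetch(path, options = {}) {
  const token = localStorage.getItem('token'); // hypothetical token storage
  const headers = {
    'Content-Type': 'application/json',
    ...(token ? { Authorization: `Bearer ${token}` } : {}),
    ...options.headers,
  };

  const response = await fetch(`${BASE_URL}${path}`, { ...options, headers });

  // Like Axios, treat HTTP error statuses as failures, not just network errors
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }

  // Like Axios, parse the JSON body automatically
  return response.json();
}

// Usage
apiFetch('/data')
  .then(data => console.log(data))
  .catch(error => console.error('Request error:', error));
```

A wrapper like this covers many of the conveniences that push teams toward Axios, so it is worth considering before adding a dependency.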
wafa_bergaoui
1,910,619
Colorado Yurts
Colorado Yurt is a company that specializes in crafting high-quality yurts, tipis, and tents. With a...
0
2024-07-03T19:06:40
https://dev.to/muhammad_bilal_b2e13f5677/colorado-yurts-3b8p
marketing
[Colorado Yurt](https://coloradoyurt.com/) is a company that specializes in crafting high-quality yurts, tipis, and tents. With a focus on durability, craftsmanship, and customization, they provide unique and versatile structures for various uses, from glamping and outdoor adventures to residential and commercial purposes. Their products are designed to withstand diverse weather conditions, offering comfort and style in any setting. Colorado Yurt is committed to sustainability and customer satisfaction, ensuring each structure meets the specific needs and preferences of their clients. https://coloradoyurt.com/
muhammad_bilal_b2e13f5677
1,910,618
Everything You Need to Know About APIs
APIs, or Application Programming Interfaces, are fundamental to building modern software...
0
2024-07-03T19:06:09
https://dev.to/annalaura2/tudo-o-que-voce-precisa-saber-sobre-apis-8jl
APIs, or Application Programming Interfaces, are fundamental to building modern software. They allow different systems to communicate and share data efficiently. In this article, we will explore in detail what APIs are, how they work, their different types, recommended practices for design and security, and practical examples. ## 1. **What is an API?** ### a. Definition An API (Application Programming Interface) is a set of definitions and protocols that allows communication between different software applications. In simple terms, it is a bridge that lets one piece of software "talk" to another. ### b. Components of an API - **Endpoint:** The specific URL where the API can be accessed. - **HTTP Method:** The action to be performed (GET, POST, PUT, DELETE). - **HTTP Headers:** Contain information about the request and the response (e.g., Content-Type). - **Request Body:** Data sent with the request (usually JSON). - **Response Body:** Data received from the API. ### c. How APIs Work APIs work through requests and responses. A client (for example, a browser or an application) makes a request to a server through an API endpoint. The server processes the request and returns a response. ## 2. **Types of APIs** ### a. RESTful APIs REST (Representational State Transfer) is a widely used architectural style. RESTful APIs use HTTP to make requests and follow simple design principles. - **Resources:** Representations of entities (e.g., users, products). - **HTTP Verbs:** GET (retrieve), POST (create), PUT (update), DELETE (remove). - **Stateless:** Each request is independent, and no state is kept between requests. ### b. SOAP APIs SOAP (Simple Object Access Protocol) is an XML-based protocol that allows the exchange of structured information. It is more complex and heavier than REST. - **XML Messages:** Requests and responses are structured in XML. - **WS-Security:** Offers advanced security features. - **Operates over several protocols:** Can use HTTP, SMTP, TCP, etc. ### c. GraphQL APIs GraphQL is a query language for APIs that lets the client specify exactly which data it needs. - **Flexible Queries:** The client can request only the data it needs. - **Fewer Requests:** Reduces the need for multiple API calls. - **Strong Types:** Defined by a GraphQL schema. ### d. Streaming APIs Streaming APIs, such as WebSockets, allow real-time communication between client and server. - **Bidirectional Connection:** Allows communication in both directions. - **Low Latency:** Ideal for applications that require real-time updates. ## 3. **API Design** ### a. Design Principles - **Consistency:** Keep a uniform standard across the entire API. - **Simplicity:** Make the API easy to understand and use. - **Documentation:** Provide clear and detailed documentation. ### b. URLs and Endpoints - **Clarity:** Use clear, descriptive URLs. - **Hierarchy:** Structure endpoints hierarchically. - **Plurality:** Use plural nouns for resources (e.g., /users, /products). ### c. HTTP Verbs - **GET:** Retrieves data. - **POST:** Creates new resources. - **PUT/PATCH:** Updates existing resources. - **DELETE:** Removes resources. ### d. Status Codes Use appropriate HTTP status codes to indicate the result of each request. - **200 OK:** Successful request. - **201 Created:** Resource created successfully. - **400 Bad Request:** Malformed request. - **401 Unauthorized:** Authentication required. - **404 Not Found:** Resource not found. - **500 Internal Server Error:** Server error. ## 4. **API Security** ### a. Authentication and Authorization - **Token-based Authentication:** Use tokens (such as JWT) for authentication. - **OAuth:** Standard protocol for secure authorization. - **API Keys:** API keys for basic authentication. ### b. Encryption - **HTTPS:** Use HTTPS to encrypt communication between client and server. - **TLS:** Protocol that guarantees data privacy and integrity. ### c. Access Control - **Rate Limiting:** Limit the number of requests to prevent abuse. - **CORS:** Configure CORS (Cross-Origin Resource Sharing) to control access from different domains. ### d. Data Validation - **Sanitization:** Clean input data to prevent code injection. - **Validation:** Check that input data conforms to what is expected. ## 5. **Good Practices for APIs** ### a. Versioning Version your API to manage changes and maintain compatibility. # Example of a versioned URL ``` https://api.exemplo.com/v1/recursos ``` ### b. Documentation - **Swagger/OpenAPI:** Tools for generating interactive documentation. - **Usage Examples:** Include examples of requests and responses. ### c. Error Handling Provide clear and informative error messages. ``` json { "error": { "code": 400, "message": "Invalid input data", "details": "The 'email' field is required." } } ``` ### d. Monitoring and Logs Implement monitoring and logging to track API usage and identify problems. ## 6. **Practical Examples** ### a. RESTful API Example #### Resource Structure ``` text GET /users GET /users/{id} POST /users PUT /users/{id} DELETE /users/{id} ``` #### Example GET Request ``` GET /users/123 HTTP/1.1 Host: api.exemplo.com Authorization: Bearer {token} ``` #### Example GET Response ``` json { "id": 123, "name": "John Doe", "email": "[email protected]" } ``` ### b. GraphQL API Example #### Query ``` graphql { user(id: 123) { id name email } } ``` #### Response ``` json { "data": { "user": { "id": 123, "name": "John Doe", "email": "[email protected]" } } } ``` ## Conclusion APIs are the backbone of communication between systems in the digital world. Understanding how to design, implement, and maintain APIs efficiently and securely is essential for any modern developer. By following the best practices and principles described in this article, you will be on the right track to creating robust, scalable, and easy-to-use APIs. Happy coding!
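As a closing example, here is a minimal sketch that ties several of these ideas together (versioning, status codes, and the error format shown above) using Express. Express is an assumption here; the same principles apply to any framework, and the routes and in-memory data are placeholders:

```javascript
// Minimal Express sketch: versioned endpoints, proper status codes,
// and the structured error format described earlier. Data is an in-memory placeholder.
const express = require('express');
const app = express();
app.use(express.json());

const users = { 123: { id: 123, name: 'John Doe', email: '[email protected]' } };

// GET /v1/users/:id -> 200 with the resource, or 404 with a structured error
app.get('/v1/users/:id', (req, res) => {
  const user = users[req.params.id];
  if (!user) {
    return res.status(404).json({
      error: { code: 404, message: 'User not found' },
    });
  }
  return res.status(200).json(user);
});

// POST /v1/users -> 201 on success, 400 with details on invalid input
app.post('/v1/users', (req, res) => {
  if (!req.body || !req.body.email) {
    return res.status(400).json({
      error: {
        code: 400,
        message: 'Invalid input data',
        details: "The 'email' field is required.",
      },
    });
  }
  const newUser = { id: 124, ...req.body };
  users[newUser.id] = newUser;
  return res.status(201).json(newUser);
});

app.listen(3000);
```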
annalaura2
1,910,617
Linux Bash Script User Creation
Hi enthusiast DevOps engineer, today we'll take a deep look at why an organization needs an...
0
2024-07-03T19:06:07
https://dev.to/dimeji_ojewunmi_5e27256/linux-bash-script-user-creation-ml8
Hi enthusiast DevOps engineer, today we'll take a deep look at why an organization needs an automated **users bash script**, with all the metrics and auto-generated passwords stored in a LOG_FILE. The main reason for creating the **users bash script** is to avoid repetitive manual user/group creation, which can be daunting in most cases, and to keep autonomous control over which employees have access to the organization's Linux workflow environment. Before I proceed, this user bash script is courtesy of the **HNG Internship** program, whose 11th cohort has just kicked off for 2024, helping beginner and intermediate tech enthusiasts gain real-world project experience. To learn more about the HNG Internship program, kindly visit {% embed https://hng.tech/internship %}, {% embed https://hng.tech/premium %}. Now let's get down to the business of the day; sit back and fasten your seat belt while we journey through this together. First, we'll create a text file called `users` with the content below; our bash script depends on this file to execute its functions.

```
light; sudo,dev,www-data
idimma; sudo
mayowa; dev,www-data
dmex; devops,sysops
iosegbo; sysops
```

Second, we'll create a file called `create_users.sh`, which runs as our bash script, with the code below:

```
#!/bin/bash

# Autogenerated passwords and metrics will be sent here
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.txt"

# This will create the /var/secure directory and restrict its permissions to the owner.
mkdir -p /var/secure
chmod 700 /var/secure

# Create or clear the log and password files
> $LOG_FILE
> $PASSWORD_FILE
chmod 600 $PASSWORD_FILE

# Function to generate a random password
generate_password() {
  echo $(openssl rand -base64 12)
}

# Read the input file (the <users> file we created, which the script depends on)
INPUT_FILE=$1

# Process each line in the file
while IFS= read -r line; do
  # Keep only the characters after the semicolon (the group list)
  after_semicolon="${line#*;}"

  # Split the line into items separated by commas
  IFS=',' read -ra items <<< "$after_semicolon"

  # Ensure a group exists for each item
  for item in "${items[@]}"; do
    item=$(echo "$item" | xargs) # Trim whitespace
    if [ ! -z "$item" ]; then
      if ! getent group "$item" > /dev/null; then
        echo "Creating group: $item"
        sudo groupadd "$item"
      else
        echo "Group already exists: $item"
      fi
    fi
  done
done < "$INPUT_FILE"

# Process each line in the input file
while IFS=';' read -r user groups; do
  # Trim whitespace
  user=$(echo "$user" | xargs)
  groups=$(echo "$groups" | xargs)

  # Create user with home directory and primary group
  if ! id "$user" &>/dev/null; then
    useradd -m "$user"
    echo "User $user was created successfully." >> $LOG_FILE
  else
    echo "User $user already exists." >> $LOG_FILE
  fi

  # Set user's groups
  if [ -n "$groups" ]; then
    usermod -aG $groups "$user"
    echo "User $user added to group: $groups." >> $LOG_FILE
  fi

  # Auto-generate and set password
  password=$(generate_password)
  echo "$user:$password" | chpasswd
  echo "$user,$password" >> $PASSWORD_FILE
  echo "Password for user $user set." >> $LOG_FILE
done < "$INPUT_FILE"
```

Third, we'll assign execute permission to the script file: `chmod 700 create_users.sh`. Lastly, we execute the script with `sudo ./create_users.sh users`. This command runs the functions written in the script, assigning every user to the specified groups and giving each user an auto-generated, encrypted password that is redirected (logged) to the `LOG_FILE` defined in the script. Here is the list of commands to run on our Linux VM to check the output of the script: **sudo cat /var/log/user_management.log** (this command shows the activities that took place during the script's execution, i.e. the metrics). **sudo cat /var/secure/user_passwords.txt** (this command shows the auto-generated password assigned to each user). To confirm that a user was created correctly, simply run the command `su dmex`, then paste the password generated for `dmex` as shown by **sudo cat /var/secure/user_passwords.txt**. Thank you for taking the time to read this post to the end; do have a wonderful time.
dimeji_ojewunmi_5e27256
1,908,152
Exploring Pigment CSS, a zero-runtime CSS-in-JS library by MUI
Written by Ivy Walobwa✏️ CSS-in-JS is a modern approach to styling web applications where CSS is...
0
2024-07-03T19:04:15
https://blog.logrocket.com/exploring-pigment-css-zero-runtime-css-in-js-library-mui
css, webdev
**Written by [Ivy Walobwa](https://blog.logrocket.com/author/ivywalobwa/)✏️** CSS-in-JS is a modern approach to styling web applications where CSS is written directly in JavaScript files. This allows CSS styles to be scoped into a component. Many developers, especially in the React community, have adopted the CSS-in-JS approach. The most popular CSS-in-JS libraries are [styled-components](https://styled-components.com/) and [Emotion](https://emotion.sh/). MUI, a React component library, recently released a promising, zero-runtime CSS-in-JS library called [Pigment CSS](https://github.com/mui/material-ui/tree/master/packages/pigment-css-react). This article will explore the features and benefits of Pigment CSS, offering a comparison to styled-components and Emotion based on performance, features, developer experience, and community support. ## Pigment CSS ![Pigment CSS](https://blog.logrocket.com/wp-content/uploads/2024/06/pigment-css.png) Pigment CSS is a zero-runtime CSS-in-JS library maintained by the Material UI team and built on top of [WyW-in-JS](https://wyw-in-js.dev/), or β€œWhatever-you-want-in-JS.” It extracts colocated styles to their own CSS files at build time. Pigment CSS currently supports Next.js and Vite. The concept of zero-runtime CSS-in-JS seeks to combine the benefits of CSS-in-JS with the performance benefits of traditional CSS. Here, all styles are compiled to static CSS files at build time, eliminating runtime overhead. This improves performance, especially on initial page loads. ## Key features of Pigment CSS Pigment CSS offers the benefits of CSS-in-JS, such as locally scoped styles and themeability, while avoiding the runtime performance cost typically associated with CSS-in-JS libraries. Some of the key features include: ### Zero runtime Styles are preprocessed during the build phase. No styles are injected and recalculated during runtime. This improves the performance of the application. ### Theming Theming is an optional feature that lets you reuse the same style values across your application. The theme objects are only used at build time and are not included in the final JavaScript bundle. ### Integration with build tools Pigment CSS seamlessly integrates with Next.js and Vite with support for more bundlers in the future. This makes it easy to add Pigment CSS to existing React applications without significant configuration. Pigment CSS allows you to define CSS in two ways: using object-style syntax, where styles are defined using JavaScript objects, or template-style syntax, where CSS styles are written using template literals. ## Comparing Pigment CSS, Emotion, and styled-components Pigment CSS uses zero-runtime CSS-in-JS, while [styled-components](https://blog.logrocket.com/using-styled-components-react/) uses runtime CSS-in-JS, and [Emotion](https://blog.logrocket.com/emotion-in-react/) uses runtime CSS-in-JS with options for extracting static styles. The three libraries can be compared as follows: * **Performance**: Due to its zero-runtime approach, Pigment CSS has reduced runtime performance but increased build time * **Bundle size impact**: Pigment CSS styles compile to pure CSS files at build time, thus having minimal impact on the JavaScript bundle size. On the other hand, styled-components inject styles into the DOM at runtime using JavaScript. Emotion can operate in two modes β€” pure runtime or static style extraction * **Dynamic styling**: Pigment CSS requires you to declare all styles and account for all combinations of props. 
styled-components and Emotion excel in applications where styles need to react dynamically to component states or props * **Developer experience**: Pigment CSS offers a developer experience similar to styled-components and Emotion. All three enhance readability by keeping styles closely tied to components. However, a notable difference lies in how the styles are defined: in Pigment CSS, unlike passing props for conditional styling, all style variants must be pre-defined * **Tooling and ecosystem**: styled-components and Emotion have a slight edge in terms of integrations and community resources due to their popularity and longer history in the market Check out our [article comparing styled-components and Emotion](https://blog.logrocket.com/styled-components-vs-emotion-for-handling-css/) for a deeper dive. ## Using Pigment CSS To use Pigment CSS, you must first configure it in your Next.js or Vite application. In this tutorial, we’ll use a Next.js app. Pigment CSS simplifies the creation of reusable styles and components for your application by providing various APIs. You can use the `css` API to create reusable styles, the `styled` API to create a component by passing styles at the end, or the `keyframes` API to create reusable animation keyframes. A theme object can also be used to reuse the same styling values across your application. ### Setting up Pigment CSS In your Next.js application, install Pigment CSS using the following command: ```yarn npm install @pigment-css/react npm install --save-dev @pigment-css/nextjs-plugin ``` This command installs the Pigment CSS library and the Next.js plugin. Next, in your `next.config.mjs` file, import the `withPigment` plugin, and wrap the Next.js config as shown below: ```javascript import { withPigment } from '@pigment-css/nextjs-plugin'; const nextConfig = {}; export default withPigment(nextConfig); ``` In your `layout.tsx` file, import the Pigment CSS stylesheet as shown: ```javascript import '@pigment-css/react/styles.css'; ``` With that, you’re ready to make use of Pigment CSS styles. ## Basic usage of Pigment CSS First, we’ll use the `css` API to create our styles. You can use the `template` or `object` syntaxes as shown below: ```javascript import {css } from "@pigment-css/react"; // template syntax const bodyBackground = css` background-color: #1D2125; color: #fff; `; // object syntax const mainClass = css({ display: "flex", flexDirection: "column", alignItems: "center", justifyContent: "center", }); ``` To apply the styles to your DOM element, add the styling as a class name to your element: ```html <html lang="en" className={bodyBackground}> </html> <main className={mainClass}> </main> ``` Next, we’ll use the `styled` API to create our styled components. Here, we create a styled heading and add variants based on the props: ```javascript const Heading = styled('div')({ fontSize: "2rem", color: "#9FADBC", fontWeight: "bold", margin: "1rem", variants: [ { props: { variant: 'success' }, style: { color: '#23AD79' }, }, { props: { size: 'small' }, style: { fontSize: '1.5rem' }, }, ], }); ``` We then add the component to our DOM as shown below. One heading uses the base styles while the other heading uses the variant styles on top of the base style: ```javascript <Heading>Pigment CSS</Heading> <Heading variant="success" size="small">Test Styling</Heading> ``` You can also style your components based on runtime values. The `isError` prop value is unknown ahead of time. 
It’s used to style the heading in a callback function: ```typescript const Heading = styled('div')({ fontSize: "2rem", color: ({ isError }: { isError: boolean }) => (isError ? 'red' : '#9FADBC'), fontWeight: "bold", margin: "1rem", }); ``` The prop value is passed to your component. Here, the heading color is set based on the `isError` value: ```css <Heading isError>Test Styling</Heading> ``` ### Theming with Pigment CSS This is an optional feature that allows you to reuse the same styling values across your application using a theme object. You can make use of the `extendTheme` utility to generate CSS variables from your theme object. First, for type safety, let’s define our theme interface in a `theme.d.ts` file as shown below: ```typescript import { ExtendTheme } from "@pigment-css/react/theme"; declare module "@pigment-css/react/theme" { interface ThemeTokens { colorScheme:{ light: { primary: string; secondary: string; background: string; text: string; error: string; }; dark: { primary: string; secondary: string; background: string; text: string; error: string; }; } } interface ThemeArgs { theme: ExtendTheme<{ colorScheme: "light" | "dark" tokens: ThemeTokens }>; } } ``` In the code snippet above, we changed the Pigment CSS theme module. We added our `ThemeTokens`, which need to match our theme object. We then redefined the `ThemeArgs` with our color scheme and tokens. Now, we’ll set the theme module to `include` in the TypeScript config file: ```typescript "include": ["theme.d.ts","next-env.d.ts", "**/*.ts", "**/*.tsx", ".next/types/**/*.ts"], ``` In our `next.config` file, we added a theme option to the `withPigment` function. We also used the `extendTheme` utility to generate CSS variables for our theme: ```javascript import { withPigment, extendTheme } from '@pigment-css/nextjs-plugin'; const nextConfig = {}; export default withPigment(nextConfig, { theme: extendTheme({ colorScheme: { light: { primary: "#9FADBC", secondary: '#23AD79', background: '#fff', text: '#000', error: '#CC3131' }, dark: { primary: "#9FADBC", secondary: '#23AD79', background: '#1D2125', text: '#fff', error: '#CC3131' }, } }), }); ``` We can use the generated theme variables in the `css` or `styled` APIs as shown. You can apply styles based on the color scheme by using the `prefers-color-scheme` media query or the `applyStyles` functions attached to `extendTheme`: ```javascript // template syntax const bodyBackground = css` background-color: ${({ theme }) => theme.colorScheme.dark.background}; color: ${({ theme }) => theme.colorScheme.dark.text}; @media (prefers-color-scheme: light) { background-color: ${({ theme }) => theme.colorScheme.light.background}; color: ${({ theme }) => theme.colorScheme.light.text}; } `; // object syntax const bodyBackground = css(({ theme }) => ({ ...theme.applyStyles("light",{ backgroundColor: theme.colorScheme.light.background, color: theme.colorScheme.light.text, }), ...theme.applyStyles("dark",{ backgroundColor: theme.colorScheme.dark.background, color: theme.colorScheme.dark.text, }) }) ) ``` ## Conclusion This tutorial explored the new CSS-in-JS library by MUI, Pigment CSS, a zero-runtime library. We discussed its key features and compared it to common CSS-in-JS libraries like Emotion and styled-components. We also went through some basic Pigment CSS library usage such as using the `css` and `styled` APIs and theming. Pigment CSS is still in the early release as of writing this article. The library will have more features and improvements as it grows. Happy coding!
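Pigment CSS also ships a `keyframes` API for reusable animations, which was mentioned earlier but not demonstrated. The sketch below assumes it follows the same object-syntax pattern as `css` and `styled`; double-check the exact API surface against the Pigment CSS docs for your version:

```jsx
// Sketch of the keyframes API, assuming it mirrors the css/styled object syntax.
// The animation name returned by keyframes is interpolated at build time.
import { keyframes, styled } from '@pigment-css/react';

const fadeIn = keyframes({
  from: { opacity: 0 },
  to: { opacity: 1 },
});

const FadeInBox = styled('div')({
  animation: `${fadeIn} 300ms ease-in`,
  padding: '1rem',
});

// Usage: <FadeInBox>Hello</FadeInBox>
```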
leemeganj
1,910,611
Setting Up a Backend with Prisma, Express, and PostgreSQL
Hi, In this post, I will go through the steps of setting up a backend using Prisma, Express, and...
0
2024-07-03T19:03:33
https://dev.to/ajor-saha/setting-up-a-backend-with-prisma-express-and-postgresql-482e
javascript, express, prisma, postgres
Hi, In this post, I will go through the steps of setting up a backend using Prisma, Express, and PostgreSQL. We will cover the installation, configuration, and implementation of a REST API and basic CRUD operations. **Prerequisites** 1. Node.js installed on your machine 2. PostgreSQL installed and running 3. Basic understanding of JavaScript and SQL **Step 1: Initialize the Project** First, create a new directory for your project and navigate into it. Then, initialize a new Node.js project. ``` mkdir my-backend cd my-backend npm init -y ``` **Step 2: Install Dependencies** Install the necessary dependencies for Express and other packages ``` npm install express bcryptjs jsonwebtoken dotenv cookie-parser npm install -D nodemon ``` **Step 3: Set Up Prisma** ``` npm install prisma --save-dev npx prisma init ``` **Step 4: Configure PostgreSQL** Update your .env file with your PostgreSQL database connection URL. ``` DATABASE_URL="postgresql://postgres:jondoe@localhost:5432/backendprisma?schema=public" ``` Update the schema.prisma file to use PostgreSQL and define your models. ``` generator client { provider = "prisma-client-js" } datasource db { provider = "postgresql" url = env("DATABASE_URL") } model User { id Int @id @default(autoincrement()) name String? email String @unique password String? post Post[] comment Comment[] created_at DateTime @default(now()) } model Post { id Int @id @default(autoincrement()) user User @relation(fields: [user_id], references: [id], onDelete: Cascade) comment Comment[] user_id Int title String description String comment_count Int @default(0) created_at DateTime @default(now()) } model Comment { id String @id @default(uuid()) post Post @relation(fields: [post_id], references: [id], onDelete: Cascade) post_id Int user User @relation(fields: [user_id], references: [id], onDelete: Cascade) user_id Int comment String created_at DateTime @default(now()) } ``` **Step 5: Migrate the Database** Run the following command to create the database tables based on your Prisma schema. ``` npx prisma migrate dev --name init ``` **Step 6: Install and generate Prisma Client** ``` npm install @prisma/client ``` **Step 7: Set Up Express Server ** Create an server.js file for your Express server. ``` import "dotenv/config"; import cookieParser from "cookie-parser"; import express from "express"; const app = express(); const PORT = process.env.PORT || 4000; // * Middleware app.use(express.json()); app.use(express.urlencoded({ extended: false })); app.use(cookieParser()); app.get("/", (req, res) => { return res.send("Hi Everyone."); }); // * routes file import userRouter from "./routes/userRoutes.js" import postRouter from "./routes/postRoutes.js" import commentRouter from "./routes/commentRoutes.js" app.use("/api/user", userRouter); app.listen(PORT, () => console.log(`Server is running on PORT ${PORT}`)); ``` **Step 8: Implement User Authentication ** Create authentication routes and controllers. 
routes/userRoutes.js ``` import { Router } from "express"; import { createUser, deleteUser, loginUser, logoutUser, updateUser } from "../controllers/userController.js"; import { verifyJWT } from "../middleware/auth.middleware.js"; const router = Router(); router.post("/adduser", createUser); router.post("/login", loginUser); router.post("/logout", verifyJWT, logoutUser); router.put("/update", verifyJWT, updateUser); router.delete("/delete", verifyJWT, deleteUser); export default router; ``` controllers/userController.js ``` import prisma from "../DB/db.config.js"; import { ApiResponse } from "../utils/ApiResponse.js"; import bcrypt from "bcryptjs"; import jwt from "jsonwebtoken"; export const createUser = async (req, res) => { const { name, email, password } = req.body; try { // Validate that all fields are provided if ( [name, email, password].some((field) => !field || field.trim() === "") ) { return res .status(400) .json(new ApiResponse(false, 400, {}, "All fields are required")); } // Check if the user already exists const findUser = await prisma.user.findUnique({ where: { email: email, }, }); if (findUser) { return res .status(400) .json( new ApiResponse( false, 400, {}, "Email already taken. Please use another email." ) ); } // Hash the password const hashedPassword = await bcrypt.hash(password, 10); // Create the new user const newUser = await prisma.user.create({ data: { name: name, email: email, password: hashedPassword, }, }); // Exclude password from the response const { password: _, ...userWithoutPassword } = newUser; return res .status(201) .json( new ApiResponse( true, 201, userWithoutPassword, "User registered successfully" ) ); } catch (error) { console.error(error); return res .status(500) .json(new ApiResponse(false, 500, null, "Internal Server Error")); } }; export const loginUser = async (req, res) => { const { email, password } = req.body; try { // Validate that all fields are provided if ([email, password].some((field) => !field || field.trim() === "")) { return res .status(400) .json( new ApiResponse(false, 400, {}, "Email and password are required") ); } // Check if the user exists const user = await prisma.user.findUnique({ where: { email: email, }, }); if (!user) { return res .status(401) .json(new ApiResponse(false, 401, {}, "Invalid email or password")); } // Check if the password is correct const isPasswordValid = await bcrypt.compare(password, user.password); if (!isPasswordValid) { return res .status(401) .json(new ApiResponse(false, 401, {}, "Invalid email or password")); } // Generate JWT access token const accessToken = jwt.sign( { userId: user.id, email: user.email }, process.env.JWT_SECRET, { expiresIn: "1d", } ); // Exclude password from the response const { password: _, ...userWithoutPassword } = user; // Set the access token as a cookie res.cookie("accessToken", accessToken, { httpOnly: true, secure: true, // Ensure secure cookies maxAge: 24 * 60 * 60 * 1000, // 1 day }); return res .status(200) .json( new ApiResponse( true, 200, { user: userWithoutPassword, accessToken }, "Login successful" ) ); } catch (error) { console.error(error); return res .status(500) .json(new ApiResponse(false, 500, null, "Internal Server Error")); } }; // update the user export const updateUser = async (req, res) => { const { name, password } = req.body; try { const userId = req.user.id; if (!req.user) { return res .status(401) .json(new ApiResponse(false, 401, null, "Unauthorized request")); } const updateData = {}; // Only add fields to updateData if they are provided if 
(name) { updateData.name = name; } if (password) { const hashedPassword = await bcrypt.hash(password, 10); updateData.password = hashedPassword; } if (Object.keys(updateData).length === 0) { return res .status(400) .json( new ApiResponse(false, 400, null, "No fields provided to update") ); } const updatedUser = await prisma.user.update({ where: { id: Number(userId), }, data: updateData, }); return res .status(200) .json( new ApiResponse(true, 200, updatedUser, "User updated successfully") ); } catch (error) { console.error(error); return res .status(500) .json(new ApiResponse(false, 500, {}, "Internal Server Error")); } }; // * Delete user export const deleteUser = async (req, res) => { try { const userId = req.user?.id; if (!req.user) { return res .status(401) .json(new ApiResponse(false, 401, null, "Unauthorized request")); } // Attempt to delete the user await prisma.user.delete({ where: { id: Number(userId), }, }); return res .status(200) .json(new ApiResponse(true, 200, null, "User deleted successfully")); } catch (error) { console.error(error); return res .status(500) .json(new ApiResponse(false, 500, null, "Internal Server Error")); } }; export const logoutUser = (req, res) => { try { // Check if user is authenticated if (!req.user) { return res .status(401) .json(new ApiResponse(false, 401, null, "Unauthorized request")); } // Clear the cookie res.clearCookie("accessToken", { httpOnly: true, secure: false, // Set to true if using HTTPS in production sameSite: "strict", }); return res .status(200) .json(new ApiResponse(true, 200, null, "User logged out successfully")); } catch (error) { console.error(error); return res .status(500) .json(new ApiResponse(false, 500, null, "Internal Server Error")); } }; ``` middleware/authMiddleware.js ``` import jwt from "jsonwebtoken"; import prisma from "../DB/db.config.js"; import { ApiResponse } from "../utils/ApiResponse.js"; export const verifyJWT = async (req, res, next) => { try { const token = req.cookies?.accessToken || req.header("Authorization")?.replace("Bearer ", ""); if (!token) { return res .status(401) .json(new ApiResponse(false, 401, null, "Unauthorized request")); } const decodedToken = jwt.verify(token, process.env.JWT_SECRET); const user = await prisma.user.findUnique({ where: { id: decodedToken.userId, }, select: { id: true, name: true, email: true, post: true, comment: true, }, }); if (!user) { return res .status(401) .json(new ApiResponse(false, 401, null, "Invalid Access Token")); } req.user = user; next(); } catch (error) { return res .status(401) .json( new ApiResponse( false, 401, null, error.message || "Invalid access token" ) ); } }; ``` 'utils/ApiResponse.js' ``` class ApiResponse { constructor(success, status, data, message) { this.success = success; this.status = status; this.data = data; this.message = message; } } export { ApiResponse }; ``` _Please note that there might be errors in this post and provided code, so refer to the official documentation, YouTube tutorials, and the GitHub repository for the complete and error-free version._ Prisma: {% embed https://www.prisma.io/docs/getting-started/setup-prisma/start-from-scratch/relational-databases-node-postgresql %} Youtube Tutorial: {% embed https://youtu.be/GReoWKUnwAg?si=7X6K_hyxQ2Tqg5YY %} Sample-Code : {% embed https://github.com/Ajor-Saha/prisma-express-postgresql %}
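One file imported throughout the controllers and middleware above but not shown is `DB/db.config.js`, which exports the Prisma client instance. A minimal sketch of that module (the file path and the optional logging settings are assumptions; adjust them to your project) could look like this:

```javascript
// DB/db.config.js: creates and exports a single PrismaClient instance
// shared by the controllers and middleware.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient({
  log: ["query", "error", "warn"], // optional: log queries during development
});

export default prisma;
```

Reusing one `PrismaClient` instance avoids opening a new database connection pool on every request.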
ajor-saha
1,910,063
Module Federation: Building a Micro Frontends Solution in 2024
In this blog, we’ll use Module Federation and Bit to implement a runtime integration of Micro...
0
2024-07-03T19:01:10
https://medium.com/p/df10edf0e8d0
javascript, webdev, typescript, frontend
In this blog, we’ll use [Module Federation](https://module-federation.io/) and [Bit](https://bit.cloud) to implement a runtime integration of Micro Frontends. We'll cover the following topics: 1. Using Bit’s templates to generate host applications and remote modules 2. Managing ModFed shared dependencies as a Bit component 3. Creating a β€œplugin system” via [shared types](https://bit.cloud/bit-bazaar/shell-app/types/navigation/~api-reference) that allows remote modules to integrate into the host application in various ways 4. Running remote modules in development within the context of their host application ![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*IhWUeHzGmF-peQNZccFn_A.png) > [The deployed solution](https://bit-bazaar.netlify.app/) <br> ![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*YcwnsOHOAQ6aufcGQWEnCw.png) > [The Bit org that maintains the MFEs solution](https://bit.cloud/bit-bazaar/~scopes) ![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*6nlxFEHjxyt3He3DICxWJQ.png) > [The repos maintaining the Bit workspaces for the shared components, host app, and remote modules](https://github.com/orgs/bit-bazaar/repositories) <br> ## Host Apps and Remote Modules The host app and remote modules were generated using pre-configured templates (made available by Bit): ```bash npx @teambit/bvm install # install Bit bit init my-modfed-solution # create a new Bit workspace cd my-modfed-solution ``` Add the following to your `workspace.jsonc` to make the ModFed templates available in your workspace: ```json "teambit.generator/generator": { "envs": [ "frontend.module-federation/envs/mf-react-env" ] } ``` Run the following commands: ```bash bit install # install the workspace dependnecies bit create modfed-remote-mfe storefront # generate a remote module bit create modfded-remote-mfe blog bit create modfed-host-app shell-app # generate a host app ``` > Run `bit templates` to list the available ModFed templates To list the available apps (and remote modules), run: ```bash bit app list ``` The output lists the component IDs and their corresponding app names: ```plaintext β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id β”‚ name β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ bit-bazaar.storefront/storefront β”‚ storefront β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ bit-bazaar.blog/blog β”‚ blog β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ bit-bazaar.shell-app/shell-app β”‚ shell-app β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ ``` You can run the apps locally by using their app name: ```bash bit run storefront ``` ## Shared Dependencies Our solution consists of many shared 
dependencies configured to be excluded from the app bundles and loaded as separate chunks. This is one of ModFed’s strengths. It allows us to optimize our bundle size, maintain consistency, and avoid conflicts between versions of the same module. Our shared dependencies are maintained as a Bit component shared across projects (the host app and the remote modules). This allows teams to maintain consistency while working independently. The list of shared dependencies consists primarily of runtime libs and a design system: ![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*SDBgYqyMyiTVR3MmXrCVtw.png) > [The β€˜shared dependencies’ component](https://bit.cloud/bit-bazaar/shell-app/shared-dependencies/~code/shared-dependencies.ts) (which lists the shred deps) is used by the host app config and remote modules config ![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*owibIMnKJ0s749r8DEwWvw.png) > [https://bit.cloud/bit-bazaar/shell-app/shared-dependencies/~code/shared-dependencies.ts](https://bit.cloud/bit-bazaar/shell-app/shared-dependencies/~code/shared-dependencies.ts) For example: ```typescript /** * @filename: storefront.bit-app.ts * @component-id: bit-bazaar.storefront/storefront */ import { MfReact } from '@frontend/module-federation.react.apps-types.mf-rspack'; /* import the 'shared dependnecies' components */ import { shellAppSharedDependencies } from '@bit-bazaar/shell-app.shared-dependencies'; export default MfReact.from({ name: 'storefront', clientRoot: './storefront.app-root.js', moduleFederation: { exposes: { // ... }, shared: shellAppSharedDependencies, } }); ``` ## A Shared Design System Our component library and theme are based on Material UI. They are maintained in the β€œdesign” scope and shared across Micro Frontends. ![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*VNkzhuxM7aQiLslASgYs9A.png) > [The β€œdesign” scope which contains the shared UI components and themes](https://bit.cloud/bit-bazaar/design) ## Shared Context The β€˜Theme Provider,’ β€˜Auth Provider,’ and other context components are part of the β€œhost app” or β€œshell app.” As such, they are maintained by the β€œshell app” team. ![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*gzRW_UPh8FVNg6AdunoDOg.png) > The context for all MFEs (provided by the [shell app](https://bit.cloud/bit-bazaar/shell-app/shell-app/~code/shell-app.tsx)) Teams working on MFEs do not need to bother with authentication, authorization, or any other shared functionality. The β€œhost” or β€œshell” team provides it all to them. For example, if team Storefront needs to implement functionality based on the user auth, they would explore the β€˜shell app’ scope and look for the proper β€œSDK”. ![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*nygdqeWAhwyCEkAhZSH8Ww.png) > [The β€œauth” context and hook](https://bit.cloud/bit-bazaar/shell-app/auth/auth-provider/~code) ## Routing and Navigation The shell app provides a sort of β€œplugin system” where Micro Frontends (remote modules) can integrate into it in ways that go beyond a simple link. It does so by providing the types for each β€œplugin”. ![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*LNIly_jIcZHgyl67TluvKg.png) > [The shared β€œnavigation item” type](https://bit.cloud/bit-bazaar/shell-app/types/navigation) For example, a remote module can implement a β€œnavigation item” interface that includes its navigation options. 
![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*O5Z7c-movW0_IlQZ-W4kMw.png) This can then be exposed for the shell app (which will load it at runtime): ```typescript /** * @filename: blog.bit-app.ts * @component-id: bit-bazaar.blog/blog */ export default MfReact.from({ name: 'blog', clientRoot: './blog.app-root.js', moduleFederation: { exposes: { /** * the MFE navigation exposed to be loaded * by the shell app at runtime **/ './blognav': './navitem.js', /** * the main chunk of the 'blog' MFE **/ './blog': './blog.js', }, shared: shellAppSharedDependencies, }, deploy: Netlify.deploy(netlifyConfig), }); ``` The routing is handled at the level that suits the module. For example, the shell app only handles routing to `/blog/*` and `/storefront/*`. It does not determine the routing β€œinside” each MFE (such as `storefront/products`). ```typescript /** * @filename: shell-app.tsx * @component-id: bit-bazaar.shell-app/shell-app */ export function ShellApp() { return ( <BrowserRouter> <Routes> <Route path="/" element={<Layout />}> <Route index element={<Homepage />} /> <Route path="store/*" element={<Store />} /> <Route path="blog/*" element={<Blog />} /> <Route path="*" element={<div>Not Found</div>} /> </Route> </Routes> </BrowserRouter> ); ``` Accordingly, remote modules, such as the `blog`, are not responsible for the `/blog/*` routing (the routing to the blog MFE)β€”only for nested routes. ```typescript /** * @filename: blog.tsx * @component-id: bit-bazaar.blog/blog */ export function Blog() { return ( <Routes> <Route path="articles" element={<ArticlesPage />} /> <Route path="categories" element={<CategoriesPage />} /> </Routes> ); } ``` ## DevX For the ultimate dev experience, each team uses a β€œPlatform” component to consume an immutable version of the shell app and possibly other remote modules. This provides the MFEs with the proper context to run in development. It ensures a consistent and seamless dev experience while properly enforcing permissions and access control (e.g., the 'blog' team cannot modify the 'storefront' MFE or the shell app). ![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*NU2_MVKTnTMHAEfUr_5QBw.png) > [https://bit.cloud/bit-bazaar/storefront/storefront-platform/~code/shell-platform.bit-app.ts](https://bit.cloud/bit-bazaar/storefront/storefront-platform/~code/shell-platform.bit-app.ts) ![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*kxmzE2RnzgLq70WKtnKPSw.png) > The 'shell-app' as an immutable dependency of the β€˜storefront-platform’ used by the β€˜storefront’ team for β€˜storefront’ development in full context For example, the β€˜storefront’ team are able to run the β€˜storefront’ MFE in its full context (shell app and even other MFEs) by running the 'platform' app maintained by them (for development only): ``` bit run storefront-platform ``` ![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*990s5cPwdR9-tYjigxgNbw.png) You can generate a 'Platform' component using the template provided by the ModFed env (the one configured as a generator at the beginning of this blog): ``` bit create modfed-platform my-platform ```
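To round this out, here is a sketch of how a host app can consume the `./blog` module exposed above at runtime. The `blog/blog` import specifier assumes a remote named `blog` is registered in the host's Module Federation configuration (which Bit's ModFed env wires up), so treat the exact specifier as an assumption:

```jsx
// Lazily load the federated 'blog' module inside the host app.
// 'blog/blog' assumes a remote named 'blog' that exposes './blog'.
import React, { Suspense } from 'react';

const Blog = React.lazy(() => import('blog/blog'));

export function BlogRoute() {
  return (
    <Suspense fallback={<div>Loading blog...</div>}>
      <Blog />
    </Suspense>
  );
}
```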
giteden
1,910,614
Automating User Management with Bash Scripting
As a SysOps engineer, managing user accounts efficiently is crucial. In this article, we will discuss...
0
2024-07-03T19:00:31
https://dev.to/maximo20/automating-user-management-with-bash-scripting-34an
As a SysOps engineer, managing user accounts efficiently is crucial. In this article, we will discuss how to automate the creation of users and groups using a bash script (create_users.sh). **Introduction** The create_users.sh script automates the creation of user accounts based on input from a text file. It handles the setup of home directories, password generation, group management, and logging, making it ideal for scaling user management tasks. This script is a practical example of how to streamline user management processes while ensuring security and compliance. **Script Overview** Parsing Input The script reads from a formatted text file where each line specifies a username and associated groups separated by semicolons. This approach allows for easy configuration and bulk user management. **User and Group Creation** The script checks if users and groups already exist, creates them if they don't, and assigns appropriate permissions and ownerships. This ensures that new users are correctly set up and integrated into the system. **Password Security** Passwords are securely generated using OpenSSL's base64 random generator and stored in /var/secure/user_passwords.txt, accessible only to the script owner. This step is crucial for maintaining the confidentiality of user credentials. **Error Handling** The script gracefully handles errors such as existing users or groups and logs these events to /var/log/user_management.log for audit purposes. This ensures transparency and accountability in user management. Step-by-Step Implementation Initialize Variables and Files The script starts by defining log and password file paths and ensuring they exist with the correct permissions: `#!/bin/bash # Ensure the secure directory exists sudo mkdir -p /var/secure sudo touch /var/secure/user_passwords.txt LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.txt" # Ensure log and password files exist sudo touch "$LOG_FILE" sudo touch "$PASSWORD_FILE" # Set permissions to secure the password file sudo chmod 600 "$PASSWORD_FILE" ` **Processing Each Line** The script processes each line of the input file, trimming whitespace and extracting usernames and groups: `while IFS=';' read -r username groups || [ -n "$username" ]; do username=$(echo "$username" | tr -d '[:space:]') groups=$(echo "$groups" | tr -d '[:space:]') # Debugging line to understand how each line is processed echo "Processing: username=$username, groups=$groups" ` User and Group Creation It checks if users and groups already exist and creates them if necessary: ``` if id "$username" &>/dev/null; then echo "$(date) - User '$username' already exists. Skipping creation." | sudo tee -a "$LOG_FILE" continue fi if ! getent group "$username" &>/dev/null; then echo "$(date) - Creating group '$username'." | sudo tee -a "$LOG_FILE" sudo groupadd "$username" fi ``` Assigning Users to Groups The script assigns users to the specified groups: ``` IFS=',' read -ra group_array <<< "$groups" for group in "${group_array[@]}"; do if ! getent group "$group" &>/dev/null; then echo "$(date) - Creating group '$group'." | sudo tee -a "$LOG_FILE" sudo groupadd "$group" fi done echo "$(date) - Creating user '$username'." 
| sudo tee -a "$LOG_FILE" sudo useradd -m -g "$username" -G "$(IFS=','; echo "${group_array[*]}")" "$username" ``` Password Generation and Assignment Passwords are generated and assigned to users, and securely stored: ``` password=$(openssl rand -base64 12) echo "$username:$password" | sudo chpasswd echo "$username,$password" | sudo tee -a "$PASSWORD_FILE" echo "$(date) - User '$username' created and assigned to groups: ${group_array[*]}." | sudo tee -a "$LOG_FILE" done < "$1" echo "$(date) - Script execution completed." | sudo tee -a "$LOG_FILE" ``` Security Considerations User passwords are generated securely and stored in a protected file to prevent unauthorized access. Proper permissions ensure that sensitive information remains confidential. Conclusion Automating user management tasks with create_users.sh improves operational efficiency and reduces human error in user account provisioning. This script exemplifies best practices in DevOps for maintaining a secure and organized user environment. By implementing create_users.sh, SysOps teams can streamline user management processes while ensuring security and compliance. For more details about HNG internship, visit [HNG Internship](https://hng.tech/internship) or click to know more https://hng.tech
maximo20
1,905,952
Why Appwrite Is Your Ideal BaaS in 2024
In the competitive world of Software as a Service (SaaS) products, speed to market can make or break...
0
2024-07-03T19:00:00
https://dev.to/alexindevs/why-appwrite-is-your-ideal-baas-in-2024-im-5aom
appwritehack, backend, javascript, saas
In the competitive world of Software as a Service (SaaS) products, speed to market can make or break your startup. You're racing against time to build, test, and deploy your application before your competitors do. However, building or integrating various backend services eats up valuable development hours and resources. What if there was a way to handle all your backend needs with a single, cohesive platform? This is where Appwrite comes in. Appwrite enables entrepreneurs to focus on what truly matters – building and scaling their product. In this article, you'll learn about Appwrite's history, features, and why it's the best choice for any entrepreneur building to scale in 2024. ![https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7si9fyh8z7as3uh4qr29.png](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7si9fyh8z7as3uh4qr29.png) ## What is Appwrite? [Appwrite](https://appwrite.io/) is a comprehensive Backend as a Service (BaaS) platform designed to help developers build and scale applications quickly and efficiently. Whether you're a solo indie hacker or part of a growing startup, Appwrite provides the essential features you needβ€”database management, authentication, storage, and cloud functionsβ€”all in one unified platform. Appwrite was founded in 2019 by Eldad Fux, a software engineer and entrepreneur. The initial vision was to create an open-source platform that simplifies backend development, making it more accessible and efficient for developers of all skill levels. Eldad noticed that many developers struggled with setting up and managing backend services, often juggling multiple tools and services to meet their needs. He envisioned Appwrite as an all-in-one solution that would streamline this process, enabling developers to focus on building their applications rather than dealing with the complexities of backend development. ## Key Features Appwrite's main selling point is its comprehensive suite of backend services. Here is a list of its features: - **Authentication**: Think about the time and effort it takes to set up a secure and flexible authentication system for your users. Appwrite simplifies this process by providing built-in support for multiple authentication methods, including email/password, Magic URL, JWT, OAuth providers (like Google, Facebook, and GitHub), and even anonymous login. This means you can offer your users a variety of login options, enhancing their experience while ensuring security. But Appwrite doesn't stop there. It also supports two-factor authentication (2FA) and robust session management, giving your users an extra layer of security and allowing them to maintain their login status across sessions and devices. Whether you're building a social app that needs social logins or an enterprise solution requiring strict access controls, Appwrite has you covered. ![https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0szho8k71cybojq4qzdl.png](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0szho8k71cybojq4qzdl.png) - **Database Management**: Appwrite simplifies database management for developers by offering robust support for both SQL and NoSQL databases within a unified platform. Whether you're launching a new startup or scaling an existing SaaS application, Appwrite allows you to effortlessly create and manage databases, collections, and documents. This means you can focus more on crafting innovative features for your application and less on the complexities of backend infrastructure. 
With Appwrite, scaling your database to accommodate growing user bases and increasing data volumes becomes seamless, ensuring your application maintains high performance and reliability as it evolves. ![https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d604vh5dg14lxrf9xzj3.png](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d604vh5dg14lxrf9xzj3.png) - **File Storage**: Appwrite simplifies file storage and management by seamlessly integrating with Content Delivery Networks (CDNs). Whether you're dealing with images, videos, or documents, Appwrite ensures that your multimedia content is securely stored and efficiently delivered to users worldwide. This CDN integration not only enhances the speed and reliability of content delivery but also reduces latency, improving the overall user experience. For SaaS applications that heavily rely on multimedia content, Appwrite's scalable file storage capabilities ensure efficient content management and delivery without concerns about storage limitations or performance bottlenecks. ![https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5434prslcz9ox9h6v3tr.png](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5434prslcz9ox9h6v3tr.png) - **Serverless Functions**: Serverless functions are a cloud computing execution model that allows developers to run code without having to manage their own servers. These functions are typically small, stateless pieces of code that are triggered by specific events, such as HTTP requests, database changes or file uploads. Appwrite gives developers the power of serverless computing, freeing them from server management to focus on writing business logic. On Appwrite, serverless functions can be triggered by different events like database changes and HTTP requests, enabling the creation of responsive and scalable applications. ![https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ponm4m8toar81g0cj0nm.png](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ponm4m8toar81g0cj0nm.png) ## Why choose Appwrite? Choosing the right backend-as-a-service (BaaS) platform is crucial for the success of your SaaS application. Here’s a direct comparison of Appwrite with other popular BaaS platforms like Firebase and AWS Amplify, highlighting their unique strengths and ideal use cases. By examining these comparisons, you can make an informed decision on the best BaaS solution for your project. 
| Feature/Platform | Appwrite | Firebase | AWS Amplify | | -------------------- | ------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------- | | Platform Focus | Unified platform for essential backend services: database, authentication, file storage, serverless functions | Real-time database capabilities and Google services integration | Comprehensive cloud services integration with AWS, including AI/ML and IoT support | | Development Approach | Simplifies development with a cohesive approach, reducing multiple service integrations | Excels in real-time synchronization, mobile development | Suitable for enterprise-level applications needing extensive scalability and complex cloud integrations | | Ideal Use Cases | Applications needing robust multimedia content management and scalable serverless architectures | Applications requiring extensive real-time synchronization and mobile development | Enterprise-level applications needing extensive scalability and complex cloud integrations | ## Setting Up Appwrite in Your Application To demonstrate the power and ease of using Appwrite, this article will take you through the process of setting up Appwrite for a web application built in React. ### Prerequisites Make sure you have the following: 1. [Node.js](https://nodejs.org/en) and npm installed on your machine. 2. Basic knowledge of [React.js](https://react.dev/). ### Step 1: Setting Up Appwrite via Browser 1. **Access the Appwrite Console:** - Open your web browser and navigate to [Appwrite Console](https://appwrite.io/). - If you don't have an account, sign up for free. Otherwise, log in to your existing account. 2. **Create a New Project:** - Once logged in, click on "Projects" in the sidebar. - Click the "Create a New Project" button. - Enter a name for your project (e.g., "Appwrite Demo Project") and optionally a description. - Click "Create Project" to proceed. ![https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cwlgrmnpka1c9ap21u6l.png](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cwlgrmnpka1c9ap21u6l.png) 3. **Get Your Project ID and API Keys:** - After creating your project, click on its name to enter the project dashboard. - Take note of your Project ID, which you will need to configure the Appwrite SDK in your application later. ![https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q8hx5yblf7jocn0taegf.png](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q8hx5yblf7jocn0taegf.png) - Navigate to the API Keys tab under Project Settings. - Create a new API key with appropriate permissions (e.g., full access for demo purposes). - Copy the API key value securely. You will use this in your application to authenticate API requests. ### Step 2: Create a New React Project To create a new React.js project, run this in your terminal: ```bash $ npx create-react-app appwrite-demo $ cd appwrite-demo ``` ### Step 3: Install Appwrite SDK To interact with Appwrite from your React application, you need to install the Appwrite SDK. 
Run the following command: ```bash $ npm install appwrite ``` ### Step 4: Initialize Appwrite in Your React App Open your `src/App.js` file and initialize the Appwrite client:

```jsx
import React, { useEffect } from 'react';
import { Client, Account, ID } from 'appwrite';

function App() {
  useEffect(() => {
    // Initialize the Appwrite client.
    // Note: API keys belong to Appwrite's server-side SDKs; the web SDK only
    // needs the endpoint and the Project ID.
    const client = new Client()
      .setEndpoint('https://cloud.appwrite.io/v1') // Replace with your Appwrite Endpoint
      .setProject('YOUR_PROJECT_ID'); // Replace with your Project ID

    const account = new Account(client);

    // Example: Create a new user
    account.create(ID.unique(), '[email protected]', 'password', 'username')
      .then(response => {
        console.log('User created:', response);
      })
      .catch(error => {
        console.error('Error creating user:', error);
      });
  }, []);

  return (
    <div className="App">
      <h1>Welcome to Appwrite Demo</h1>
    </div>
  );
}

export default App;
```

### Step 5: Run Your React App Finally, start your React application by running: ```bash $ npm start ``` Open your browser and navigate to `http://localhost:3000` to see your application in action. You should see the message "Welcome to Appwrite Demo" and, in your console, a response indicating the creation of a new user. ## Conclusion As we’ve explored, Appwrite stands out as an ideal backend solution for SaaS startups in 2024 due to its comprehensive platform that simplifies backend development. By offering essential services like database management, authentication, file storage, and serverless functions in one cohesive package, Appwrite eliminates the need for multiple service integrations, saving valuable development time and resources. Its open-source nature, robust security features, and scalability make it a reliable choice for both indie hackers and growing startups. With Appwrite, developers can focus on what truly matters: building and scaling their applications to meet market demands swiftly and efficiently. Thanks for reading!
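**Bonus: a quick look at the Databases service.** The demo above only exercises authentication. If you also want to try Appwrite's database features, a minimal sketch could look like the following; the database and collection IDs are placeholders that you would first create in the Appwrite console, so check the SDK docs for your version:

```javascript
// Sketch: creating a document with Appwrite's Databases service.
// DATABASE_ID and COLLECTION_ID are placeholders created in the console first.
import { Client, Databases, ID } from 'appwrite';

const client = new Client()
  .setEndpoint('https://cloud.appwrite.io/v1') // Replace with your Appwrite Endpoint
  .setProject('YOUR_PROJECT_ID'); // Replace with your Project ID

const databases = new Databases(client);

databases
  .createDocument('YOUR_DATABASE_ID', 'YOUR_COLLECTION_ID', ID.unique(), {
    title: 'Hello Appwrite',
  })
  .then(doc => console.log('Document created:', doc))
  .catch(err => console.error('Error creating document:', err));
```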
alexindevs