| Column | Type | Min | Max |
|---|---|---|---|
| id | int64 | 5 | 1.93M |
| title | stringlengths | 0 | 128 |
| description | stringlengths | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | stringlengths | 14 | 581 |
| tag_list | stringlengths | 0 | 120 |
| body_markdown | stringlengths | 0 | 716k |
| user_username | stringlengths | 2 | 30 |
1,913,181
Service Discovery in Microservices With .NET and Consul
Microservices have revolutionized how we build and scale applications. By breaking down larger...
0
2024-07-07T20:38:18
https://www.milanjovanovic.tech/blog/service-discovery-in-microservices-with-net-and-consul
microservices, dotnet, consul, servicediscovery
---
title: Service Discovery in Microservices With .NET and Consul
published: true
date: 2024-07-06 00:00:00 UTC
tags: microservices,dotnet,consul,servicediscovery
canonical_url: https://www.milanjovanovic.tech/blog/service-discovery-in-microservices-with-net-and-consul
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n2k04b0gmyb5lj4h8s24.png
---

Microservices have revolutionized how we build and scale applications. By breaking down larger systems into smaller, independent services, we gain flexibility, agility, and the ability to adapt to changing requirements quickly.

However, microservices systems are also very dynamic. Services can come and go, scale up or down, and even move around within your infrastructure. This dynamic nature presents a significant challenge. How do your services find and communicate with each other reliably?

Hardcoding IP addresses and ports is a recipe for fragility. If a service instance changes location or a new instance spins up, your entire system could grind to a halt.

Service discovery acts as a central directory for your microservices. It provides a mechanism for services to register themselves and discover the locations of other services.

In this week's issue, we'll see how to implement service discovery in your .NET microservices with Consul.

## What is Service Discovery?

Service discovery is a pattern that allows developers to use logical names to refer to external services instead of physical IP addresses and ports. It provides a centralized location for services to register themselves. Clients can query the service registry to find out the service's physical address. This is a common pattern in large-scale distributed systems, such as Netflix and Amazon.

Here's what the service discovery flow looks like:

1. The service will register itself with the service registry
2. The client must query the service registry to get the physical address
3. The client sends the request to the service using the resolved physical address

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d5lk44ytit7dvt694kpl.png)

The same concept applies when we have multiple services we want to call. Each service would register itself with the service registry. The client uses a logical name to reference a service and resolves the physical address from the service registry.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eyjq7sf9zh3yacn771cw.png)

The most popular solutions for service discovery are Netflix [Eureka](https://github.com/Netflix/eureka) and HashiCorp [Consul](https://www.consul.io/).

There is also a lightweight solution from Microsoft in the `Microsoft.Extensions.ServiceDiscovery` library. It uses application settings to resolve the physical addresses for services, so some manual work is still required. However, you can store service locations in [Azure App Configuration](https://azure.microsoft.com/en-us/products/app-configuration) for a centralized service registry. I will explore this service discovery library in some future articles. But now I want to show you how to integrate Consul with .NET applications.

## Setting Up the Consul Server

The simplest way to run the Consul server locally is using a Docker container. You can create a container instance of the `hashicorp/consul` image.
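If you just want a throwaway dev-mode server before setting up compose, a single `docker run` also works (a minimal sketch; the container name is only an example):

```bash
docker run -d --name=dev-consul -p 8500:8500 hashicorp/consul agent -dev -client=0.0.0.0
```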
Here's an example of configuring the Consul service as part of the `docker-compose` file:

```yml
consul:
  image: hashicorp/consul:latest
  container_name: Consul
  ports:
    - '8500:8500'
```

If you navigate to `localhost:8500`, you will be greeted by the Consul Dashboard.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m569bwi21t8pln5lrvua.png)

Now, let's see how to register our services with Consul.

## Service Registration in .NET With Consul

We'll use the [Steeltoe Discovery](https://docs.steeltoe.io/api/v3/discovery/) library to implement service discovery with Consul. The Consul client implementation lets your applications register services with a Consul server and discover services registered by other applications.

Let's install the `Steeltoe.Discovery.Consul` library:

```powershell
Install-Package Steeltoe.Discovery.Consul
```

We have to configure some services by calling `AddServiceDiscovery` and explicitly configuring the Consul service discovery client. The alternative is calling `AddDiscoveryClient`, which uses reflection at runtime to determine which service registry is available.

```csharp
using Steeltoe.Discovery.Client;
using Steeltoe.Discovery.Consul;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddServiceDiscovery(o => o.UseConsul());

var app = builder.Build();

app.Run();
```

Finally, our service can register with Consul by configuring the logical service name through application settings. When the application starts, the `reporting-service` logical name will be added to the Consul service registry. Consul will store the respective physical address of this service.

```json
{
  "Consul": {
    "Host": "localhost",
    "Port": 8500,
    "Discovery": {
      "ServiceName": "reporting-service",
      "Hostname": "reporting-api",
      "Port": 8080
    }
  }
}
```

When we start the application and open the Consul dashboard, we should be able to see the `reporting-service` and its respective physical address.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ym13ybu4fus95vvojhka.png)

## Using Service Discovery

We can use service discovery when making HTTP calls with an `HttpClient`. Service discovery allows us to use a logical name for the service we want to call. When sending a network request, the service discovery client will replace the logical name with a correct physical address.

In this example, we're configuring the base address of the `ReportingServiceClient` typed client to `http://reporting-service` and adding service discovery by calling `AddServiceDiscovery`.

Load balancing is an optional step, and we can configure it by calling `AddRoundRobinLoadBalancer` or `AddRandomLoadBalancer`. You can also configure a custom load balancing strategy by providing an `ILoadBalancer` implementation.

```csharp
builder.Services
    .AddHttpClient<ReportingServiceClient>(client =>
    {
        client.BaseAddress = new Uri("http://reporting-service");
    })
    .AddServiceDiscovery()
    .AddRoundRobinLoadBalancer();
```

We can use the `ReportingServiceClient` typed client like a regular `HttpClient` to make requests. The service discovery client sends the request to the external service's IP address.

```csharp
app.MapGet("articles/{id}/report", async (Guid id, ReportingServiceClient client) =>
{
    var response = await client
        .GetFromJsonAsync<Response>($"api/reports/article/{id}");

    return response;
});
```

## Takeaway

Service discovery simplifies the management of microservices by automating service registration and discovery.
This eliminates the need for manual configuration updates, reducing the risk of errors. Services can discover each other's locations on demand, ensuring that communication channels remain open even as the service landscape evolves.

By enabling services to discover alternative service instances in case of outages or failures, service discovery enhances the overall resilience of the microservices system. Mastering service discovery gives you a powerful tool to build modern distributed applications.

You can grab the [source code for this example here](https://github.com/m-jovanovic/service-discovery-consul).

Thanks for reading, and I'll see you next week!

* * *

**P.S. Whenever you're ready, there are 3 ways I can help you:**

1. [**Pragmatic Clean Architecture:**](https://www.milanjovanovic.tech/pragmatic-clean-architecture?utm_source=dev.to&utm_medium=website&utm_campaign=cross-posting) Join 2,900+ students in this comprehensive course that will teach you the system I use to ship production-ready applications using Clean Architecture. Learn how to apply the best practices of modern software architecture.

2. [**Modular Monolith Architecture:**](https://www.milanjovanovic.tech/modular-monolith-architecture?utm_source=dev.to&utm_medium=website&utm_campaign=cross-posting) Join 750+ engineers in this in-depth course that will transform the way you build modern systems. You will learn the best practices for applying the Modular Monolith architecture in a real-world scenario.

3. [**Patreon Community:**](https://www.patreon.com/milanjovanovic) Join a community of 1,050+ engineers and software architects. You will also unlock access to the source code I use in my YouTube videos, early access to future videos, and exclusive discounts for my courses.
milanjovanovictech
1,913,230
HackerRank Beautiful 3 Set Problem Solution
HackerRank Beautiful 3 Set Problem Solution GitHub_ror2022 CV_ror2022 LinkedIn portfolio We will...
0
2024-07-05T23:05:52
https://dev.to/ror2022/hackerrank-beautiful-3-set-problem-solution-13fp
algorithms, javascript, node, problemsolve
HackerRank Beautiful 3 Set Problem Solution

[GitHub_ror2022](https://github.com/ROR2022) [CV_ror2022](https://docs.google.com/document/d/104ek8dOTdOU6RcDMtGT-g1T--FWxq2earIDvMZoQ79E/edit?usp=sharing) [LinkedIn](https://www.linkedin.com/in/ramiro-ocampo-5a661b1a7/) [portfolio](https://prodigy-wd-05-kappa.vercel.app/#/portfolio)

We will solve the HackerRank Beautiful 3 Set problem.

Given an integer n, a set S of triples (x, y, z) is beautiful if and only if:

- 0 <= x_i, y_i, z_i
- x_i + y_i + z_i = n
- Let X be the set of different x's in S, Y be the set of different y's in S, and Z be the set of different z's in S. Then |X| = |Y| = |Z| = |S|.

The third condition means that all values of x are pairwise distinct, as are all values of y and all values of z.

Given n, find any beautiful set having the maximum number of elements. Then, print the cardinality of S (i.e., |S|) on a new line, followed by |S| lines where each line contains 3 space-separated integers describing the respective values of x_i, y_i, and z_i.

**Input Format**

A single integer, n.

**Output Format**

On the first line, print the cardinality of S (i.e., |S|). For each of the |S| subsequent lines, print three space-separated numbers per line describing the respective values of x_i, y_i, and z_i for triple i in S.

**Sample Input**

```
3
```

**Sample Output**

```
3
0 1 2
2 0 1
1 2 0
```

**Solution Analysis**

We need a function in JavaScript that returns the combination of valid triples given a number n. This combination should satisfy:

- Each of its triples (x, y, z) satisfies x + y + z = n.
- The set of x values are distinct, the set of y values are distinct, and the set of z values are distinct.
- The cardinality k is the length of the combination of triples that meets these conditions.

**Examples Analysis**

- For n=1: Cardinality: 1. Triples: `0 1 0`
- For n=2: Cardinality: 2. Triples: `0 2 0`, `1 0 1`
- For n=3: Cardinality: 3. Triples: `0 1 2`, `1 2 0`, `2 0 1`
- For n=4: Cardinality: 3. Triples: `0 1 3`, `1 2 1`, `2 0 2`
- For n=5: Cardinality: 4. Triples: `0 3 2`, `1 4 0`, `2 0 3`, `3 1 1`

**General Observations**

From the examples, the pattern for the cardinality k can be observed as k = n - offset, where the offset follows the pattern 0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, ..., so:

offset = Math.floor((n - 1) / 3)

The upper limit of z is k - 1 and the lower limit is 0, unless n is part of the sequence 4, 7, 10, 13, 16, ..., where the upper limit becomes k and the lower limit becomes 1.

**JavaScript Implementation**

```javascript
const beautiful3Set = (data) => {
  // determine the cardinality of the set based on the number of elements
  const n = Number(data); // number of elements
  const o = Math.floor((n - 1) / 3); // offset
  const k = n - o; // cardinality of the set
  // the cardinality of the set is the number of triples
  // 0 <= x <= k-1
  // y = n - x - z
  let evenZ = [];
  let oddZ = [];

  // first we need to determine if n is in the set 4, 7, 10, 13, 16, 19, 22, 25, ...
  const isNInSet = (num) => {
    const temp = (num - 1) / 3;
    return Number.isInteger(temp) && temp >= 1;
  };

  let upperLimZ = 0;
  let lowerLimZ = 0;
  if (isNInSet(n)) {
    // if n is in the set then
    upperLimZ = k;
    lowerLimZ = 1;
  } else {
    // if n is not in the set then
    upperLimZ = k - 1;
    lowerLimZ = 0;
  }

  for (let c = upperLimZ; c >= lowerLimZ; c--) {
    if (c % 2 === 0) {
      evenZ.push(c);
    } else {
      oddZ.push(c);
    }
  }

  // build the complete sequence of z values
  let completeZ = [];
  if (evenZ.length >= oddZ.length) {
    completeZ = [...evenZ, ...oddZ];
  } else {
    completeZ = [...oddZ, ...evenZ];
  }

  let triplets = [];
  for (let x = 0; x < k; x++) {
    const z = completeZ[x];
    const y = n - x - z;
    triplets.push([x, y, z]);
  }

  // output
  console.log('Input n:', n);
  console.log('cardinality:', k);
  triplets.forEach((triplet) => {
    console.log(triplet.join(' '));
  });
};
```

This function calculates the valid combination of triples for a given n and prints the cardinality and the triples themselves. The algorithm is designed to ensure that the conditions of the problem are met and provides a clear, efficient solution.

[GitHub_ror2022](https://github.com/ROR2022) [CV_ror2022](https://docs.google.com/document/d/104ek8dOTdOU6RcDMtGT-g1T--FWxq2earIDvMZoQ79E/edit?usp=sharing) [LinkedIn](https://www.linkedin.com/in/ramiro-ocampo-5a661b1a7/) [portfolio](https://prodigy-wd-05-kappa.vercel.app/#/portfolio)
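As a final aside, here is a quick way to sanity-check a generated set against the three conditions (a small helper added for illustration, not part of the original solution):

```javascript
// Hypothetical helper: verifies the "beautiful set" conditions for a list of triples.
const isBeautiful = (triplets, n) => {
  const k = triplets.length;
  // every triple is non-negative and sums to n
  const sumsOk = triplets.every(([x, y, z]) => x >= 0 && y >= 0 && z >= 0 && x + y + z === n);
  // all x values, y values, and z values are pairwise distinct
  const xs = new Set(triplets.map((t) => t[0]));
  const ys = new Set(triplets.map((t) => t[1]));
  const zs = new Set(triplets.map((t) => t[2]));
  return sumsOk && xs.size === k && ys.size === k && zs.size === k;
};

console.log(isBeautiful([[0, 1, 2], [1, 2, 0], [2, 0, 1]], 3)); // true
```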
ror2022
1,913,262
My Journey as a Backend Developer: Tackling Complex Problems and Embracing the HNG Internship
Hey there! I'm a passionate software developer from Nigeria, and today, I want to share a recent...
0
2024-07-05T23:56:18
https://dev.to/oj_redifined/my-journey-as-a-backend-developer-tackling-complex-problems-and-embracing-the-hng-internship-10gd
postgressql, backend, javascript
Hey there! I'm a passionate software developer from Nigeria, and today, I want to share a recent challenge I faced and how I overcame it. I believe that a big part of being a backend developer is not just about writing code, but also about solving problems and continuously learning. Plus, I want to tell you all about the exciting journey I'm about to start with the HNG Internship and why I chose this path.

### The Challenge: Optimizing Database Performance

Recently, I was working on a project that required handling a large volume of data in a PostgreSQL database. Everything seemed to be running smoothly until we started experiencing significant performance issues. Queries that used to take milliseconds were now taking seconds, and the overall user experience was deteriorating.

### Step-by-Step Solution

#### 1. Identifying the Bottleneck

The first step in solving any problem is understanding its root cause. I started by using the PostgreSQL slow query log to identify which queries were taking the most time. It turned out that several of our JOIN operations were causing delays due to the large datasets involved.

#### 2. Analyzing and Optimizing Queries

Next, I analyzed these slow queries to see if they could be optimized. I found that some of the JOIN operations were unnecessary and could be replaced with simpler subqueries. Additionally, I made sure to use indexes effectively. By adding indexes to the columns used in the WHERE clause and JOIN operations, the query performance improved significantly (a concrete sketch of this step appears in the P.S. at the end of this post).

#### 3. Implementing Caching

Despite optimizing the queries, there were still some performance issues due to the sheer volume of data being processed. To tackle this, I implemented caching. By caching the results of frequently accessed queries, I was able to reduce the load on the database and speed up response times for users.

#### 4. Database Sharding

For a more long-term solution, I explored database sharding. This involved splitting the large dataset into smaller, more manageable pieces and distributing them across multiple servers. By doing so, each query had to process less data, further improving performance.

### Lessons Learned

This experience taught me a lot about database optimization and the importance of efficient query design. It also reinforced my belief that problem-solving is at the heart of backend development. No matter how complex the issue, there's always a way to break it down and find a solution.

### Embarking on the HNG Internship

I'm thrilled to share that I've been accepted into the [HNG Internship](https://hng.tech/internship)! This is a fantastic opportunity for me to learn, grow, and connect with other passionate developers. The HNG Internship is known for its rigorous training and hands-on projects, and I can't wait to dive in.

One of the main reasons I chose to join the [HNG Internship](https://hng.tech/internship) is the emphasis on real-world experience. As someone who thrives on tackling challenging problems, I believe this internship will provide me with the perfect platform to sharpen my skills and take my career to the next level.

### Conclusion

Being a backend developer is an exciting journey filled with challenges and opportunities for growth. By sharing my experiences and solutions, I hope to inspire others to embrace problem-solving and continuous learning. I'm looking forward to the incredible journey ahead with the HNG Internship and can't wait to see where this adventure takes me!
Thank you for reading, and if you're interested in learning more about the HNG Internship or hiring talented developers, check out [HNG Tech](https://hng.tech/hire) for more information.
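P.S. To make the indexing step (#2 above) concrete, here is the kind of change involved. This is a sketch with hypothetical table and column names, not the actual queries from the project:

```sql
-- Inspect the plan and timing of a slow query (hypothetical orders/users tables)
EXPLAIN ANALYZE
SELECT o.id, o.total
FROM orders o
JOIN users u ON u.id = o.user_id
WHERE o.status = 'pending';

-- Index the columns used in the JOIN and WHERE clauses
CREATE INDEX idx_orders_user_id ON orders (user_id);
CREATE INDEX idx_orders_status ON orders (status);
```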
oj_redifined
1,912,124
How Astro DB reminded me what SSR really means
I've been using Astro recently to build mostly static sites. Currently, I'm working on a server-side...
0
2024-07-05T23:21:25
https://dev.to/fearandesire/how-astro-db-taught-me-what-ssr-really-means-4gfh
astro, ssr, node, learning
I've been using [Astro](https://astro.build/) recently to build mostly static sites. Currently, I'm working on a server-side rendered (SSR), semi-complex website. Since Astro is so enjoyable to use, I chose it for this project. This post sheds light on a subtle but important aspect of using Astro with SSR enabled and Astro DB that isn't immediately apparent from the documentation—and the solution.

**Preface:** If you're already familiar with the inner workings of SSR in web development, this post may be obvious! I've worked with other web frameworks that handled this through abstraction or simple configuration.

**What I'm using in this project**:

- [Astro](https://astro.build/) - Web framework
- [Astro DB](https://astro.build/db/) - Database solution
- [Vue.js](https://vuejs.org/) - Interactive components

# My Goal

It was simple—I wanted to retrieve the user profile in my Vue composable file, which would then process and display this information in the Vue component. I figured that was as simple as it was in native Vue. Here's a simplified example of the code I originally had.

**Astro DB Service Class**

```typescript
// db/services/UserProfile.service.ts
import { db, eq, UserProfile } from 'astro:db';

// Named UserProfileService to avoid shadowing the imported `UserProfile` table.
export default class UserProfileService {
  static async getUserProfile(userId: string) {
    // Filter to the requested user (assuming an `id` column on the table).
    const userData = await db
      .select()
      .from(UserProfile)
      .where(eq(UserProfile.id, userId));
    return userData;
  }
}
```

**Vue Composable**

```typescript
// src/composables/useUserProfile.ts
import UserProfileService from 'db/services/UserProfile.service';

// `await` requires the composable to be async.
export async function useUserProfile(userId: string) {
  const userData = await UserProfileService.getUserProfile(userId);
  // business logic, etc..
  return { userData };
}
```

Straightforward, right?

# The Problem

With my setup, I received error messages that didn't explicitly state what was wrong. Even more puzzling, I got different errors when I changed the command to start the app.

I first started via our standard `astro dev` command and navigated to the page that would request the data, but it completely errored out. The page was constantly refreshing with this error:

`Internal server error: Failed to resolve import "\db\seed.js"`

# Troubleshooting

From the error specifying `seed.js`, I assumed I had something wrong with my seed data. What I tried to resolve this:

- Modifying seed data and the defined tables
- Like all developers: searching Google and Reddit for similar issues
- Checking the [Astro GitHub Repo](https://github.com/withastro/astro) for similar issues

I couldn't find this error specifically mentioned by anyone. I figured maybe it was a bug with Astro DB, given that it was fairly new at the time of writing. I tried one more thing: I added the `--remote` flag to the start command, pushed my schema changes, and tried this out on my "production" database. This time, the page loaded, but the data wasn't present. The console showed:

`[astro-island] Error hydrating /src/pages/mypage/pageName.vue ReferenceError: process is not defined`

# My Solution

Let's revisit that preface I gave earlier. I'll say that this made me feel really silly for not noticing. The problem was caused by trying to access the database directly from component code that hydrates in the browser, which makes perfect sense. We're in SSR. This may seem obvious - but again - it's not really **_explicitly_** stated in the Astro DB documentation.

It's worth noting that I already had an existing system designed in this codebase - the composable reaches out to an API endpoint (which was an [Astro.js Endpoint](https://docs.astro.build/en/guides/endpoints/)), and the endpoint then reaches out to the DB.

I got lazy and wanted to quickly pull this data to see if my Vue page was set up properly - and directly referenced it. Upon reading [this section](https://docs.astro.build/en/guides/astro-db/#insert) in the Astro DB documentation, I noticed that they use an API endpoint to make a request to the database. That's when I realized that was my mistake. I made my changes and created another API endpoint like my others; the API endpoint reaches out to the DB. I ran my project again, and it worked fine!

# Fin

My main goal in sharing this is to hopefully save the time of others who encounter this issue. Looking back, it's a very obvious mistake to make. It's not something I expect to be explicitly stated within the Astro documentation, either. It's implied, within this being an SSR context, that, like similar frameworks, we need to make a network request to fetch data from a database. Overall, it's part of the developer experience.
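For readers who want to see the shape of the fix, here is a hedged sketch of the endpoint-based approach described above. The paths, names, and column are illustrative, not the author's actual code:

```typescript
// src/pages/api/user-profile/[id].ts — an Astro endpoint (illustrative path)
import type { APIRoute } from 'astro';
import { db, eq, UserProfile } from 'astro:db';

export const GET: APIRoute = async ({ params }) => {
  // The endpoint runs on the server, so it may talk to Astro DB directly.
  const userData = await db
    .select()
    .from(UserProfile)
    .where(eq(UserProfile.id, params.id!));
  return new Response(JSON.stringify(userData), {
    headers: { 'Content-Type': 'application/json' },
  });
};
```

The composable then fetches over the network instead of importing the DB module:

```typescript
// src/composables/useUserProfile.ts — client-safe version
export async function useUserProfile(userId: string) {
  const res = await fetch(`/api/user-profile/${userId}`);
  const userData = await res.json();
  return { userData };
}
```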
fearandesire
1,913,233
Hosting Static Website On Azure Blob Storage.
A static website is an already developed or written code which is done using (in this case Visual...
0
2024-07-05T23:16:53
https://dev.to/romanus_onyekwere/hosting-static-website-on-azure-blob-storage-n9j
microsoft, azure, vscode, website
A static website is already-developed, fixed code, written (in this case) using Visual Studio Code (VS Code). There is no authorization or authentication on a static website; everything is fixed unless someone decides to change the code. A static website is prepared by professional website developers and uses a variety of syntax and code that the browser understands when the site is launched. Another functional component is CSS, which works with every website; its function is to add colour and beauty to the website. A static website enables us to edit some elements, and we can also customise it. Of important note are the root files, which enable us to effectively run and launch the website.

**Steps**

- Download the latest version of VS Code and install it.
- A link to a sample website will be given for download.
- The sample website folder comes compressed; unzip it so the files are extracted.
- You will see the root files, index.html and 404.html.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gsrnb71xnthvz96pejpm.png)

Using File Explorer makes it very easy to open the root files (index.html and 404.html).

- Open VS Code
- Click on File

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ix9x457l3s1t1g76egzh.png)

- Select a folder.
- Locate the folder in your Downloads, click on it to select it, and open it in VS Code.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qmx7lsa0tkwjvf483zfj.png)

- Click index.html, which contains the code as seen.
- You can edit some elements, especially those shown in white characters.
- You can edit elements as they appear on lines 7, 46, 91, and 109.
- Be careful not to change the surrounding code, so the page does not render incorrectly in the browser.

Note: index.html is the home page.

- Save

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vnp3nectt0opudh1kr4q.png)

- The newly edited index.html page will look like:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gql7cyjvg7d8sqd1dca8.png)

- Open 404.html.
- This is a file which contains a polite error message.
- Edit some elements here.
- Save

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u3solya88cm8rnrkutab.png)

- The newly edited 404.html page will look like the image below when you click something that is not on the home page:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/98i5ja1lo6dnsp7yjamm.png)

**Creating a new storage account**

- Log in to the **[Azure portal](https://portal.azure.com)**
- Click + Create

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zbptibufesv4dmsew6mq.png)

- From the **Basics tab**, under <u>_Project details_</u>:
- Make sure the right subscription is selected
- Create a new resource group (websiteRG)
- Under <u>_Instance details_</u>:
- Choose a unique storage account name
- Region is East US
- Performance is Standard
- Redundancy is Geo-redundant storage
- Click Next

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iu13wg0hw7u3jvcpvb6n.png)

- Advanced
- Networking
- Data protection
- Encryption
- Tags
- The above are left at their defaults
- Click <u>_Review + Create_</u>

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6i10y3wfx786li16cj7m.png)

- Deployment completed
- Click on <u>_Go to resource_</u>

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kxmj0onenfrt8bt5vrtz.png)

**Locating the static website**

This can be done in _<u>two ways</u>_.

- From the **_<u>Overview page</u>_**
- Click Capabilities
- Locate the static website

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rnfi5ft59gdtcqit0i6v.png)

- Still on the _<u>Overview page</u>_
- Scroll down
- Click Data management
- Click Static website
- Click Enabled
- Index document name: input the root file (_index.html_)
- Error document path: _404.html_
- Save
- Azure generates primary and secondary endpoints automatically

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/onuvpq97loz1n59e39t2.png)

- Copy the link to the primary endpoint
- Paste it into a browser and press Enter
- An error message is displayed because we have not yet uploaded our documents, which are still in a local folder

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vc7z849bvxo94px6zkl4.png)

- Azure created a new container in the storage account to host our static website ($web)
- Still on the Overview page
- Scroll to Data storage
- Click Containers
- Click the $web container

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/anloqu91niem1mtamugb.png)

**Uploading Blobs**

- Click Upload
- Browse for the files
- Locate the files on the computer

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/761vqpdq95u6aprkj4gq.png)

- Highlight the files
- Drag and drop them onto the blob upload pane

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rfv5tniux632mwv4dto8.png)

- Click Upload
- The files are now in the container and can be accessed

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3abehlj9ztwhe2h28e80.png)

**To access the static website**

- From the Overview page
- Scroll to Data management
- Click Static website
- Copy the link from the primary endpoint
- Paste it into a browser

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z11rkkypq3cv1uxzwyt0.png)

See the static website below:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ckr1mg7n5sfyns43zrbr.png)
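If you prefer the command line, the same setup can be reproduced with the Azure CLI. This is a sketch, where `<storage-account>` and the local `./website` folder are placeholders for your own values:

```bash
# Enable static website hosting with the index and error documents
az storage blob service-properties update \
  --account-name <storage-account> \
  --static-website \
  --index-document index.html \
  --404-document 404.html

# Upload the site files into the $web container
az storage blob upload-batch \
  --account-name <storage-account> \
  --source ./website \
  --destination '$web'
```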
romanus_onyekwere
1,913,228
Innovate! Don't Waste Your Skills. Think Like Wright Brothers!
*Prioritizing engineering reasoning and critical thinking, akin to the Wright Brothers’ innovative...
0
2024-07-05T23:08:01
https://dev.to/ameet/innovate-dont-waste-your-skills-think-like-wright-brothers-52o3
innovation, ai, genai, aws
**Prioritizing engineering reasoning and critical thinking, akin to the Wright Brothers' innovative mindset, is crucial for executives and technical managers to drive company-wide innovation. These skills enable the anticipation of trends, the solving of complex problems, and the fostering of a creative, problem-solving culture. By embedding these principles at the core of a company's ethos, it not only leads to groundbreaking advancements but also secures a competitive market position.**
ameet
1,913,231
Beyond Algorithms: The Soul of Human Art in the Age of AI
With the release of Sora and Dall-E by OpenAI, Veo by Google, and other powerful AI models generating...
0
2024-07-05T23:06:17
https://dev.to/aisquare/beyond-algorithms-the-soul-of-human-art-in-the-age-of-ai-42c7
ai, aiart
With the release of Sora and Dall-E by OpenAI, Veo by Google, and other powerful AI models generating a plethora of AI-generated art, a new portal of endless possibilities has opened up — one where digital art is just a prompt away. This is a fascinating world to consider, where movies like Interstellar could be created with just a few steps of prompt engineering. But is there something that separates the code from the canvas? The algorithms from the art? The data from design?

## How do artists and AI learn art?

Both AI and humans learn about art by observing it — the difference being that AI is trained on vast amounts of data, while humans are equipped with eyes and a brain to appreciate and absorb the beauty of nature and other art forms. This raises the question — is AI art also “art” as we define it? Humans also draw inspiration from existing art throughout their lifetimes, similar to what an AI model does.

## But what is creativity?

In simple terms, isn’t creativity just iterating over a few concepts, mixing them up, and coming up with something that hasn’t been seen before — yet originates from previously known ideas? In this way, we can argue that both AI and humans continuously iterate on ideas until they find the best one, which can be labeled as creative and unique. What’s the difference then?

The major difference between AI art and human art is the intent and the emotion behind it (for now). Humans infuse emotions into their art, heavily influenced by the feelings they experience through interactions with others. It conveys what they feel, what they want to express, or the emotions it evokes in others.

> “ART IS HOW WE DECORATE SPACE, MUSIC IS HOW WE DECORATE TIME.” — JEAN-MICHEL BASQUIAT

With this definition of art and music, it is fair to say that for something to resonate with us and be part of our space and worth our time, it must connect with us on a personal level. Humans have the ability to tailor-make that — that’s what makes human art unique.

Another difference lies in how humans learn — it’s not just about observing art, but also interpreting it, influenced by factors like culture, thought processes, and various other qualities. This makes the learning process unique to each individual, which isn’t the case with AI. AI models learn by analyzing previous data and iterating on different parameters to generate the output.

One more major difference is imperfection. Human art is often imperfect and unpredictable. The artist’s thoughts and progress can be seen in their work, whereas AI rarely changes its learning or output methods.

## What’s next?

With advancements in AI, we might see more sophisticated and emotionally resonant art pieces. Could AI someday develop its own unique style? How might AI tools evolve to assist artists in ways we can’t yet imagine? AI can also create interactive art experiences that respond to the audience’s inputs in real time. This shows us a new dimension where art responds to the viewer and is unique to each viewer.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gejql0vdtcmojavyhzl8.png)

ABOUT AISQUARE

AISquare is an innovative platform designed to gamify the learning process for developers. Leveraging an advanced AI system, AISquare generates and provides access to millions, potentially billions, of questions across multiple domains. By incorporating elements of competition and skill recognition, AISquare not only makes learning engaging but also helps developers demonstrate their expertise in a measurable way. The platform is backed by the Dynamic Coalition on Gaming for Purpose (DC-G4P), affiliated with the UN’s Internet Governance Forum, which actively works on gamifying learning and exploring the potential uses of gaming across various sectors. Together, AISquare and DC-G4P are dedicated to creating games with a purpose, driving continuous growth and development in the tech industry.

You can reach us at [LinkedIn](https://www.linkedin.com/groups/14431174/), [X](https://x.com/AISquareAI), [Instagram](https://www.instagram.com/aisquarecommunity/), [Discord](https://discord.com/invite/8tJ3aCDYur).

Author — Reyansh Gupta
aisquare
1,913,229
Automating Linux User Creation with Bash Script
In today's fast-paced technology environment, efficiency and automation are key. Automating tasks...
0
2024-07-05T22:55:28
https://dev.to/anthony_obotidem_4e5d7748/automating-linux-user-creation-with-bash-script-3lgb
In today's fast-paced technology environment, efficiency and automation are key. Automating tasks with a Bash script can save a significant amount of time and reduce errors. In this technical report, we will walk through the process of creating a Bash script to automate user and group creation, setting up home directories, and managing permissions and passwords.

**Project Overview**

Your company has recently hired several new developers, and you need to create user accounts and groups for them. To streamline this process, we will write a Bash script called create_users.sh. This script will:

1. Read a text file containing usernames and group names,
2. Create users and groups as specified,
3. Set up home directories,
4. Generate random passwords, and
5. Log all actions to /var/log/user_management.log and store the generated passwords securely in /var/secure/user_passwords.txt.

**Bash Script Implementation Steps**

Let's walk through the script step by step to understand its functionality.

Checking root privileges: the shebang line specifies that the script should be executed with the Bash shell. The script then checks if it is being run as root. If not, it prompts the user to run the script with root privileges and exits. *(screenshot: root privileges check)*

Checking for the user data file: the script checks if the filename (user-data-file) is provided as an argument. If not, it displays the correct usage and exits. *(screenshot: user data file check)*

**Initializing Variables and Creating Directories**

The script creates the necessary directories and sets appropriate permissions to ensure security. Here, `user_data_file` stores the filename provided as an argument. Additionally, `log_file` and `password_file` store the paths for logging actions and storing passwords. *(screenshot: initializing variables)*

Generating random passwords: a function generates random passwords using openssl. *(screenshot: random password function)*

Reading the user data file and creating users: the script reads the user data file line by line. For each line, it:

- Trims any leading or trailing whitespace from the username and groups.
- Checks if the user already exists. If so, it logs the information and moves on to the next user.
- Creates the user and assigns them a personal group. *(screenshot: creating users)*

Adding users to additional groups: if additional groups are specified, the script adds the user to these groups, creating the groups if they do not exist. *(screenshot: adding users)*

Setting home directory permissions: the script sets appropriate permissions for the user's home directory. *(screenshot: directory permissions)*

Generating and storing passwords: the script generates a random password, sets it for the user, and stores it in the password file. *(screenshot: storing passwords)*

Logging actions: finally, the script logs all actions and completes the user creation process. *(screenshot: logging actions)*

**Running the Script**

Create the txt file containing the users and the groups; this text file defines the user accounts' structure. Save and close the file. *(screenshot: users.txt)*

Every line in the file identifies a user along with the groups (such as "admin" or "finance") to which they are assigned. A semicolon divides the user from their groups. *(screenshot: users.txt structure)* A consolidated sketch of the full script follows below.
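Putting the steps above together, here is an illustrative version of what such a create_users.sh could look like. It is a sketch based on the steps described, not the original script from the screenshots; details like the exact log format and field handling are assumptions:

```bash
#!/bin/bash
# Sketch of create_users.sh, following the steps described above.

LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.txt"

# Must run as root
if [[ $EUID -ne 0 ]]; then
  echo "Please run this script as root." >&2
  exit 1
fi

# A user data file must be provided as the first argument
if [[ $# -ne 1 ]]; then
  echo "Usage: $0 <user-data-file>" >&2
  exit 1
fi
USER_DATA_FILE="$1"

# Create log/password locations with restrictive permissions
mkdir -p /var/secure
touch "$LOG_FILE" "$PASSWORD_FILE"
chmod 600 "$PASSWORD_FILE"

log() {
  echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$LOG_FILE"
}

generate_password() {
  openssl rand -base64 12
}

# Each line looks like: username; group1,group2
while IFS=';' read -r username groups; do
  # Trim leading/trailing whitespace
  username="$(echo "$username" | xargs)"
  groups="$(echo "$groups" | xargs)"
  [[ -z "$username" ]] && continue

  if id "$username" &>/dev/null; then
    log "User $username already exists, skipping."
    continue
  fi

  # Create the user with a home directory and a personal group of the same name
  useradd -m -U "$username"
  log "Created user $username with personal group $username."

  # Add the user to any additional groups, creating them if needed
  IFS=',' read -ra group_list <<< "$groups"
  for group in "${group_list[@]}"; do
    group="$(echo "$group" | xargs)"
    [[ -z "$group" ]] && continue
    getent group "$group" >/dev/null || groupadd "$group"
    usermod -aG "$group" "$username"
    log "Added $username to group $group."
  done

  # Lock down the home directory
  chown "$username:$username" "/home/$username"
  chmod 700 "/home/$username"

  # Set and record a random password
  password="$(generate_password)"
  echo "$username:$password" | chpasswd
  echo "$username,$password" >> "$PASSWORD_FILE"
  log "Set password for $username."
done < "$USER_DATA_FILE"

log "User creation process completed."
```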
Ensure the script is executable. *(screenshot: execute script)*

Run the script. *(screenshot: run script)*

**Verify the Results**

1. Check the log file for actions performed. *(screenshot)*
2. Verify the user passwords file. *(screenshot)*
3. Ensure the new users and groups are created correctly. *(screenshot)*

**Conclusion**

This script automates the creation of users and groups, ensuring a streamlined onboarding process. This article is a stage two task in the DevOps track of the HNG internship. For more information about the HNG Internship and how it can benefit your organization, visit [HNG Internship](https://hng.tech/internship) and [HNG Hire](https://hng.tech/hire). By following this tutorial, you can make your organization's user management procedure more efficient and ensure that new developers are onboarded promptly.

Wishing you the best as you continue your tech journey!
anthony_obotidem_4e5d7748
1,913,227
Documenting APIs in Ruby on Rails using Swagger
Hello there! Welcome to the world of API documentation. Today, we're going to explore how to make...
0
2024-07-05T22:30:55
https://dev.to/abdullah_saleh_7b00752404/documenting-apis-in-ruby-on-rails-using-swagger-28gp
Hello there! Welcome to the world of API documentation. Today, we're going to explore how to make your APIs more accessible and understandable using Swagger in a Ruby on Rails environment. Let's dive in!

## Understanding APIs and Swagger

Imagine you've created a fantastic system, but you need a way for others to interact with it easily. That's where APIs come in. They're like helpful assistants that take requests and return the right information. But here's the challenge: How do you tell others exactly how to talk to your API?

That's where Swagger enters the scene. Think of Swagger as a friendly translator. It helps explain your API in a way that both humans and computers can understand easily.

## Why Swagger is Awesome

1. Clear Documentation: Swagger shows all available endpoints, parameters, and responses.
2. Always Up-to-Date: When you change your API, Swagger documentation updates automatically.
3. Try It Out: Developers can test API calls directly from the documentation.
4. Happy Developers: Clear documentation means fewer misunderstandings and easier integration.

## Setting Up Swagger in Your Rails Project

Let's walk through how to set up Swagger in your Ruby on Rails project. Here's a step-by-step guide:

1. Install the Swagger Gem: Add this to your Gemfile:

```ruby
gem 'swagger-docs'
```

Then run:

```
bundle install
```

2. Configure Swagger: Create a file `config/initializers/swagger_docs.rb`:

```ruby
Swagger::Docs::Config.register_apis({
  "1.0" => {
    :api_extension_type => :json,
    :api_file_path => "public",
    :base_path => "http://api.yourdomain.com",
    :clean_directory => false,
    :attributes => {
      :info => {
        "title" => "Your API Title",
        "description" => "API description",
        "contact" => "[email protected]"
      }
    }
  }
})
```

3. Document a Controller: Here's an example of how to document a controller:

```ruby
class ItemsController < ApplicationController
  swagger_controller :items, "Item Management"

  swagger_api :index do
    summary "Retrieves all items"
    notes "This lists all available items"
    response :ok, "Success", :Item
    response :not_found, "No items available"
  end

  def index
    render json: Item.all
  end
end
```

4. Define a Model: Document your model like this:

```ruby
class Item < ApplicationRecord
  swagger_schema :Item do
    property :id, :integer
    property :name, :string
    property :description, :string
  end
end
```

5. Generate Documentation: Run this command:

```
rake swagger:docs
```

6. Use the Documentation: The generated `api-docs.json` in your public folder is now ready to be used with Swagger UI.

## How It All Works Together

Now, when a developer wants to use your API, they can easily see all available endpoints. For example, if they want to get all items, they know they need to make a GET request to `/items`.

## Keeping Documentation Updated

Whenever you make changes to your API, just update your controller and model documentation, run the rake task again, and your API documentation is instantly updated.

## Why This Approach Is Effective

1. Clear Communication: Developers always know how to use your API correctly.
2. Efficiency: Reduces misunderstandings and incorrect API usage.
3. Flexibility: Easy to update and maintain as your API evolves.
4. Developer-Friendly: Interactive documentation makes testing and integration straightforward.

## Conclusion

By using Swagger in your Ruby on Rails project, you've created a clear, easy-to-understand guide for your API. It's like having a helpful assistant that always knows exactly how your API works and can explain it to anyone who needs to use it.
Remember, good documentation is key to making your API accessible and user-friendly. With Swagger, you're not just building an API; you're creating a smooth experience for every developer who uses it. Happy coding, and may your APIs always be well-documented and easy to use!
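One quick way to confirm the generated documentation is being served, assuming Rails serves static files from `public/` (the `api_file_path` configured above) on the default port — adjust to your setup:

```
curl http://localhost:3000/api-docs.json
```

If the JSON comes back, you can point a Swagger UI instance at that URL to browse the documentation interactively.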
abdullah_saleh_7b00752404
1,913,226
[Game of Purpose] Day 48
Today I did not manage to do anything, because my girlfriend dragged me to bed.
27,434
2024-07-05T22:22:30
https://dev.to/humberd/game-of-purpose-day-48-4inn
gamedev
Today I did not manage to do anything, because my girlfriend dragged me to bed.
humberd
1,913,221
Hosting a static website on Azure Blob Storage
Azure Storage is a cloud-based storage service provided by Microsoft Azure, designed to store a vast...
0
2024-07-05T22:16:53
https://dev.to/abidemi/hosting-a-static-website-on-azure-blob-storage-5bk6
azure, container, cloudcomputing, learning
Azure Storage is a cloud-based storage service provided by Microsoft Azure, designed to store a vast range of data types in a scalable, secure, and cost-effective manner. Azure Blob Storage is a service for storing large amounts of unstructured data such as text, images, videos, and backup files. Blob Storage is often used to host websites, distribute media files, and store backup data.

So how do you host a static website on Azure Blob Storage? Here's a step-by-step guide to help you get started:

**1. Create an Azure Account**

If you don't already have an Azure account, sign up at portal.azure.com.

**2. Prepare Your Website Files**

Ensure you have all the static files (HTML, GIFs, fonts, JavaScript, images, etc.) ready in a local folder.

**3. Create an Azure Storage Account**

Azure Storage provides a scalable and secure way to host static websites.

Go to the Azure Portal: log in at the Azure Portal.

Create a Storage Account: click on the "Create a resource" button, search for "Storage Account" and select it, then click "Create".

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o3y82nitxweoj04g22dw.png)

**4. Configure the Storage Account**

- Subscription: choose your subscription.
- Resource group: select an existing group or create a new one.
- Storage account name: enter a unique name for your storage account.
- Region: choose a region close to your users.
- Performance: choose Standard.
- Replication: choose the replication option that suits your needs (e.g., LRS, GRS).

Click "Review + Create" and then "Create" after validation.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gjnmpa6kweogrxes14wb.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ypj7ozkdpmjufd5fqnnz.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ndrip90p8ynu25v0ent.png)

When deployment is done, click on Go to resource.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gi7qnssul6ruhtehnilq.png)

**5. Navigate to Static Website**

- Open the Data management dropdown
- Click on Static website

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dl4w53ticext52guqz8f.png)

**6. Configure the Static Website**

- Enable Static website
- Enter the index document name
- Enter the error document path
- Then save

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2vyb1rzpc013gk21y7ka.png)

Azure will create two links to host the static website: the primary and secondary endpoints.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/09dbj30g4l8fjls46gzm.png)

**7. Upload Your Website Files**

Go back to the Storage Account, then navigate to the "Containers" section under "Data storage". You'll see a new container called $web (created when you enabled static website hosting).

Upload Files: click on the `$web` container.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qpl925dfjr64so1s1tcj.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9og318zzpp3tb6ffs8n4.png)

**8. Upload Files from the PC**

Click on Upload. Go to where the website folder is located on the computer. Highlight all the files in the folder. Drag and drop the files from that location into the provided box.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5kr7d76fna109izn7uvn.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/prdigiqfoh3mqrdb5oj8.png)

The files will upload to Blob Storage; it might take a while to upload all of them.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tqwuphh4093iujb0e3xo.png)

**9. Testing in a Browser**

When your files upload successfully, paste your primary endpoint into your browser.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j7attzppg1w513i64azl.png)

I hope this step-by-step guide helped you host a static website on Azure Storage. Thank you.
abidemi
1,913,223
Forget Resumes! This AI Hiring Tool Predicts Your Next Superstar Employee with 99% Accuracy
In today's fast-paced business world, finding the right talent quickly and efficiently is more...
0
2024-07-05T22:16:37
https://dev.to/dheemanth_reddy/forget-resumes-this-ai-hiring-tool-predicts-your-next-superstar-employee-with-99-accuracy-113b
In today's fast-paced business world, finding the right talent quickly and efficiently is more crucial than ever. Traditional hiring methods often fall short, leaving HR professionals and hiring managers overwhelmed and frustrated. But what if there was a way to streamline your recruitment process, reduce costs, and identify top talent with unprecedented accuracy? Enter Invue – the cutting-edge AI-powered interviewing solution that's changing the game in talent acquisition.

**Harness the Power of AI for Smarter Hiring**

Invue brings the future of recruitment to your fingertips, offering a suite of innovative features designed to transform your hiring process:

1. Real-Time, Conversational AI Interviews

Gone are the days of rigid, impersonal interviews. Invue's state-of-the-art conversational ai interviewer interface creates a dynamic, engaging experience for candidates. Imagine Sarah, a talented software developer, participating in an ai interview that adapts to her responses in real-time. The natural flow of conversation puts her at ease, allowing her true potential and communication skills to shine through.

2. Structured Interview Plans to Reduce Bias

Unconscious bias can significantly impact hiring decisions. Invue tackles this challenge head-on with AI-driven structured interview tools and plans. These ensure that every candidate, regardless of background, is evaluated fairly and consistently using explainable AI. For small business owner Mark, this means he can trust that his talent acquisition process is equitable and compliant, even without a large HR department, promoting diversity hiring and bias reduction.

3. AI-Powered Asynchronous Interviews

Time zones and scheduling conflicts are no longer obstacles. With Invue AI's asynchronous ai video interviewing, candidates can showcase their skills at their convenience, and you can review their responses when it suits you best. This flexibility is a game-changer for busy recruiter Emily, who can now efficiently screen candidates outside of traditional office hours using this smart hiring solution.

4. Comprehensive Hiring Signals and Interview Scores

Make data-driven decisions with confidence. Invue AI provides clear, actionable insights through AI-generated hiring signals and job matching scores. HR manager Tom can quickly identify whether to hire, hold, or reject candidates based on detailed performance metrics like behavioral competencies and psychometric evaluation, streamlining his decision-making process.

5. Skill-Specific Interviewers and AI Reports

One size doesn't fit all in hiring. Invue AI offers customizable ai interview software tailored to specific skills and roles. Receive in-depth AI reports that analyze both technical prowess through job skills tests and soft skills like personality traits, ensuring you have a holistic view of each candidate. For tech recruiter Lisa, this means she can confidently assess both coding skills and cultural fit in one seamless candidate assessment process.

**The Invue AI Advantage: Transforming Recruitment Metrics**

The benefits of incorporating this interview ai tool into your hiring process are clear and compelling:

- 85% Reduction in Recruitment Costs: Slash your hiring expenses by automating the initial screening process with ai interview practice. Imagine reinvesting those savings into employee development or other critical business areas.
- Over 90% Interview Completion Rate: With a user-friendly and flexible ai mock interview format, candidate participation skyrockets. Say goodbye to no-shows and scheduling headaches.
- Reduced Anxiety and Improved Performance: Create a stress-free ai interview prep environment that allows candidates to showcase their true potential. This leads to more accurate assessments, better candidate experience, and improved hiring decisions.

**Real-World Impact: A Case Study**

Consider the story of TechInnovate, a rapidly growing startup that struggled with their hiring process. Long interview cycles, inconsistent evaluations, and high recruitment costs were hindering their growth. After implementing Invue AI's ai for job interviews solution:

- Their time-to-hire decreased by 60%
- Recruitment costs were cut by 75%
- Employee retention rates improved by 30% due to better-fit hires

The HR team at TechInnovate now spends less time on administrative tasks and more time on strategic initiatives, driving the company's success forward.

**Embrace the Future of Hiring Today**

In a world where talent is the ultimate competitive advantage, can you afford to rely on outdated hiring methods? Invue AI offers a seamless blend of cutting-edge technology like automated chat interviews and human-centric design, empowering you to make smarter, faster, and fairer hiring decisions through transparent hiring.

Don't let top talent slip through your fingers. Experience the Invue AI difference for yourself with free ai interview practice.

Try Invue AI Now

Revolutionize your hiring process, reduce costs, and unlock the potential of AI-powered interview software. Your next star employee is just an AI interview away!

Invue AI: Redefining the art of hiring, one interview at a time.
dheemanth_reddy
1,896,373
Reverse Thinking and Sequential Thinking: A Comparison in Setting Life Goals (Bite-size Article)
Introduction Do you currently have something you are working on long-term and aiming to...
0
2024-07-05T22:16:26
https://dev.to/koshirok096/reverse-thinking-and-sequential-thinking-a-comparison-in-setting-life-goals-bite-size-article-1616
# Introduction

Do you currently have something you are working on long-term and aiming to achieve? There are various approaches to setting life goals. Among them, "**reverse thinking**" and "**sequential thinking**" are very popular and well-known methodologies for achieving goals.

Personally, I tend to lean towards reverse thinking, but after various experiences in my twenties, I now primarily base my actions on reverse thinking while occasionally incorporating sequential thinking. In this article, I will compare these two thinking methods and explain how they can be useful in setting life goals.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vyfh5wlqo40gbzcpveop.png)

# Reverse Thinking: Placing Pieces While Looking at the Completed Puzzle

**Reverse thinking** is like <u>looking at the picture of a completed puzzle and figuring out where to place each piece</u>. Since the ultimate goal (the completed picture) is clearly visible, it becomes easy to determine which pieces (steps) should be placed and in what order to achieve the goal.

- Goal: The completed puzzle picture
- Action: Placing the puzzle pieces

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8vmesnhtf66ssaawmia8.png)

---

The advantage of this method is that <u>you can always keep the big picture in mind</u> as you progress, making it easier to take specific and planned actions. It's easy to set realistic schedules and timelines, and clearly manage task prioritization.

However, <u>long-term plans are vulnerable to changes in the environment and circumstances</u>. What you envisioned a year ago might not remain the same today. In the real world, the picture of the completed puzzle often changes over time. When this happens, you may be forced to change or abandon your plans. While the term "reverse thinking" sounds solid and low-risk, it can actually become risky if you cannot adapt to changes.

## Advantages

- Always keeping the big picture in mind, making specific and planned actions easier.
- Easy to set realistic schedules and timelines.
- Clearly managing task prioritization.

## Disadvantages

- Long-term plans are vulnerable to changes in the environment and circumstances.
- The goal may change over time, forcing plan changes or abandonment.
- While reverse thinking seems solid and low-risk, it can become risky if you cannot adapt to changes.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cqlqa6quiypb46ben3xt.png)

# Sequential Thinking: Drawing a Map While Exploring

**Sequential thinking** is like <u>exploring unknown land while drawing a map</u>. Starting from your current location, you observe the paths and scenery in front of you and proceed step by step. As you advance, the map (the path towards the goal) gradually becomes clearer.

- Goal: Focus on the immediate task (proceeding along the path)
- Action: Determine the next step based on what you see in front of you (deciding the next action based on the current situation)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v3sj29zybqa8nus7n4et.png)

---

The advantage of this method is that <u>it allows you to always take actions that are grounded in reality</u>. You can remain agile and flexible, adjusting your direction as circumstances change. Additionally, serendipity (fortunate accidents) is more likely to occur.

The disadvantage is that <u>there is no plan to reap the benefits of long-term strategic planning</u>. The place you are exploring might be a barren island with cliffs, and there would be no point in exploring such a place for a long time. Of course, it might not be the case, but you won't know until you try.

## Advantages

- Easy to take actions grounded in reality.
- Always remain agile and flexible, adjusting direction as circumstances change.
- Serendipity (fortunate accidents) is more likely to occur.

## Disadvantages

- It is difficult to plan and proceed with long-term strategies.
- Current actions may not always lead to optimal long-term benefits, risking wasted time and resources.
- There is a high possibility of proceeding haphazardly, lacking consistency in reaching a major goal.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d6xv2o32tce4xemy86la.png)

# Maintaining Motivation

Reverse thinking and sequential thinking differ significantly in how they maintain motivation, and I believe this is a very important aspect.

## Maintaining Motivation with Reverse Thinking

Reverse thinking is fundamentally about "**maintaining motivation through clear goals**". Since you always have a clear goal in sight, <u>thinking about how far you are from that goal becomes a source of motivation</u>.

For example, imagine you work in sales and have 1,000 pairs of shoes in stock. After six months of hard work in sales and operations, the stock has been reduced to 500 pairs. In this case, you can think that if you continue with the same effort for another six months, you will be able to sell out the remaining 500 pairs. The thought of "just six more months" can become your motivation. By concretely grasping the progress towards achieving your goal, it can make you feel relieved and motivated, making this method suitable for such situations.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/894x5o7blw5g4ti25ka0.png)

## Maintaining Motivation with Sequential Thinking

Sequential thinking also has its own method for maintaining motivation. Sequential thinking is characterized by "**maintaining motivation through the accumulation of small successes**". By clearing the tasks and steps in front of you one by one, you can gradually feel that you are getting closer to your goal.

For example, imagine you continue running for 30 minutes every day. Even if you could only run a short distance at first, by noticing that you can run a little further each day, you can gain a sense of achievement. In this way, each small goal you achieve provides a success experience that motivates your next action, allowing you to continually move towards your ultimate goal.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kz74kjnxy74abaf68i3p.png)

Unlike reverse thinking, <u>not having a long-term goal can lower the barriers</u>, which is a strength of this method.

## Comparison

Finally, let's briefly summarize and compare the content discussed so far.

| **Feature** | **Reverse Thinking** | **Sequential Thinking** |
|---|---|---|
| **Planning Method** | Determine steps by working backwards from the goal | Determine steps starting from the current situation |
| **Advantages** | Specific and realistic planning, clear prioritization | Flexibility, sense of progress, adaptability |
| **Applications** | Achieving long-term, specific goals | Continuous growth and adaptation |
| **Motivation** | Motivation maintained by clear goals | Motivation maintained by accumulating small successes |
| **Risks** | High risk if plans don't proceed as expected | Risk of unclear goals |

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hqe0avdl5l3p6bxo9om6.png)

## Conclusion

Which method do you use more often? In my opinion, both methods have their own characteristics, and neither is necessarily better than the other. The smart approach is to combine and switch between them depending on the situation.

I hope this article was helpful in some way! Thank you for reading.
koshirok096
1,912,050
Secure File Sharing with Azure Storage and Encryption
This blog post will guide you through creating secure shared storage for your application in...
0
2024-07-05T22:15:28
https://dev.to/jimiog/secure-file-sharing-with-azure-storage-and-encryption-120
azure, microsoft, cloud, devops
This blog post will guide you through creating secure shared storage for your application in Microsoft Azure. We'll cover storage account creation, access control with managed identities, and data encryption using Azure Key Vault.

**Creating a Secure Storage Account**

1. **Search** for "Storage Account" in Azure and create one with your desired name and resource group.

![Searching for Storage Account](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ifg9o2i3z85k7o126w1.jpg)

![Going to Encryption](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d4dnrsgn5bg6pgc1v3fv.jpg)

2. **Enable Infrastructure Encryption** for added security at rest.

![Enabling the Encryption](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l3tcxw7c8a94snklbhdp.jpg)

**Adding Managed Identity for Access Control**

1. Search for "Managed Identities" and create one within your resource group.

![Creating managed identity](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/etkpf65thuhs4k799tmj.jpg)

![Configuring the Managed Identity](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cngestpn531b2iec6fdr.jpg)

2. Go to your storage account's **Access Control (IAM)** settings.

![Locating the IAM Access Control](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b0eavqyjqpv62d5jeuur.jpg)

3. Assign the **Storage Blob Data Reader** role to the managed identity you created.

![Adding a role assignment](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1vdztjccf3k1525wp5fa.jpg)

![Search for storage blob data reader](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nb13ljk4figh4a7odfyi.jpg)

![Assigning to Managed identity](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lq091m2auurmfncoplrh.jpg)

![Searching for the managed identity](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jjq1e3dnqtf8ophld71v.jpg)

**Securing Storage with Key Vault and Key**

1. Ensure you have **Key Vault Administrator** permissions. Assign this role to your user account.

![Navigating back to the storage account](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mevfuffhm38c5fr8odyk.jpg)

![Locating IAM Control again](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qwbutvx0y8jf75whmswj.jpg)

![Adding role assignment](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1nbpq19ivmcecmadziqa.jpg)

![Adding key vault admin role](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/89xuticf2bsk5kjp76gm.jpg)

![Adding role assignment to your user](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ywz9xarv0v21vvvlhidp.jpg)

2. Search for "Key Vaults" and create one with a name and resource group.

![Creating Key Vault](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vybw1zvfrfmtkzss0abn.jpg)

![Configuring Key Vault](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dzxo7vuougnyklz2klvs.jpg)

![Changing the Access Configuration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b27cvwmseaos84mf4l1g.jpg)

3. Enable **Soft delete** and **Purge protection** for additional security.

![Checking soft-delete and purge protection enabled](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3u8pcd94hj7z5m9ld8eg.jpg)

4. Generate a new key within the Key Vault.

![Generating the key](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4po9vcl3q61f32fkjg4w.jpg)

![Creating the key](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hvs5ruqd2zmjir4w3loc.jpg)

**Configuring Storage Account to Use Key Vault Key**
1. In your resource group's IAM settings, assign the **Key Vault Crypto Service Encryption User** role to your managed identity.

![Searching for the key vault crypto role](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mi3pcc3yh8u9wz6yhfic.jpg)

![Assigning the role to your identity](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jqru1avomolagb3fptvn.jpg)

2. Go to your storage account's **Encryption** settings and configure it to use the customer-managed key from your Key Vault.

![Locating Encryption in the Storage Account](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nglte16hgua95agvlo4i.jpg)

![Configuring the Encryption](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5jydqdv9vs2g4b0adpz2.jpg)

3. Select the managed identity you created to give it access to the key.

![Selecting the key](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zr01l5ynrda5w4qajul.jpg)

**Setting Retention Policy and Encryption Scope**

1. Create a container named "hold" within your storage account.

![Locating Container](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bsy44qy79h991ucpl1r6.jpg)

2. Upload a file to the container.

![Uploading a file to the container](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rzgzavathkgp0td7amss.jpg)

3. Set a **time-based retention policy** on the container to prevent accidental deletion for a specified period (e.g., 5 days).

![Locating Access Policy of the Container](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gx2ozqx2co2ha94of00w.jpg)

![Creating an immutable policy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8npyhr9p1hmtj9dqjzvv.jpg)

4. Create an **encryption scope** within your storage account for additional infrastructure-level encryption.

![Creating the encryption scope](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fo73kkmkoj7k9oh2trx8.jpg)

**Conclusion**

By following these steps, you've created a secure shared storage solution in Azure. You've leveraged managed identities for access control, secured data with Azure Key Vault, and implemented retention policies and encryption scopes for enhanced protection. Remember to clean up your resources after following this guide in a non-production environment.
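If you'd rather script the portal walkthrough above, a rough Azure CLI equivalent might look like the sketch below. All resource names are placeholders I invented, and the values in angle brackets must come from your own subscription; treat this as a starting point rather than a one-to-one reproduction of every portal step:

```bash
# Placeholder names throughout -- substitute your own.
az group create --name rg-secure-share --location eastus

# Storage account with infrastructure (double) encryption at rest
az storage account create \
  --name stsecureshare \
  --resource-group rg-secure-share \
  --sku Standard_LRS \
  --require-infrastructure-encryption true

# User-assigned managed identity, then grant it blob read access
az identity create --name id-storage-reader --resource-group rg-secure-share
az role assignment create \
  --assignee <identity-principal-id> \
  --role "Storage Blob Data Reader" \
  --scope <storage-account-resource-id>

# Key vault with purge protection (soft delete is on by default
# for new vaults), plus a customer-managed key
az keyvault create \
  --name kv-secure-share \
  --resource-group rg-secure-share \
  --enable-purge-protection true
az keyvault key create --vault-name kv-secure-share --name cmk-storage --kty RSA
```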
jimiog
1,906,805
🎲 Creating a Die with Flexbox and CSS Grid Layout
When we run into a positioning need in CSS, we immediately ask ourselves - should we...
0
2024-07-05T22:14:37
https://dev.to/maiquitome/criando-um-dado-com-flexbox-e-css-grid-layout-3pl1
braziliandevs, css, flexbox, grid
When we run into a positioning need in CSS, we immediately ask ourselves - should I use Flexbox or CSS Grid Layout? Which approach is best for this case?

In this article we'll look at three different approaches to positioning the pips on the six faces of a die. The first approach uses Flexbox, and the other two use CSS Grid Layout.

[🔗 Get the code here on Github](https://github.com/maiquitome/creating-dice-using-css-grid-and-flexbox.git)

[![A Jornada do Autodidata em Inglês](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pkoqkmh9hic17tnddjf6.png)](https://go.hotmart.com/R93865620U)

* [Creating the `main.css` file](#criando-o-arquivo-main-css)
* [Flexbox example](#exemplo-com-flexbox)
  - [Creating the first face](#criando-a-primeira-face-flexbox)
  - [Creating the second face](#criando-a-segunda-face-flexbox)
  - [Creating the third face](#criando-a-terceira-face-flexbox)
  - [Creating the fourth face](#criando-a-quarta-face-flexbox)
  - [Creating the fifth face](#criando-a-quinta-face-flexbox)
  - [Creating the sixth face](#criando-a-sexta-face-flexbox)
  - [Centering everything on the screen](#centralizando-tudo-na-tela)
* [Grid Template Areas example](#exemplo-com-grid-template-areas)
  - [Positioning the Pips](#posicionando-os-pontos)
  - [Using `grid-template-areas`](#usando-grid-template-areas)
  - [Using `grid-area`](#usando-grid-area)
  - [Centering the pips in their cells](#centralizando-os-pontos-nas-celulas)
  - [But I can still see a problem](#mas-ainda-estou-exergando-um-problema)
* [Grid Rows and Grid Columns example](#exemplo-com-grid-rows-e-grid-columns)
  - [Understanding how `grid-row` and `grid-column` work](#entendendo-como-funciona-grid-row-e-grid-column)
  - [Positioning the pips](#posicionando-os-pontos-com-grid-row-e-grid-column)
* [Final thoughts](#considerações-finais)

{% youtube https://youtu.be/_rdeC6r-bRA %}

## Creating the `main.css` file <a name="criando-o-arquivo-main-css">

In the project root, create the `main.css` file. It will hold the styles shared by the `flexbox` example and the `CSS Grid Layout` examples.

Let's build the square that will serve as every face of the die. For that, we'll select all `div` elements whose `class` attribute value ends with the string `face`. And with the `.pip` class we'll build each pip (dot).

In `main.css` add:

```css
* {
  margin: 0;
}

body {
  background-color: #1b2231;
}

div[class$="face"] {
  width: 104px;
  height: 104px;
  background-color: #54c59f;
  border-radius: 10%;
  margin: 16px;
}

.pip {
  width: 24px;
  height: 24px;
  background-color: #080a16;
  border-radius: 50%;
  margin: 4px;
}
```

-------------------------------------------------------------------

## Flexbox example <a name="exemplo-com-flexbox">

Let's start by creating the HTML file. Create the file `flexbox/index.html`:

![Creating the directories](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/89ve6ip9s6b3rbnrdmm6.png)

And let's import `main.css`.
In the `flexbox/index.html` file add:

```html
<head>
  <link rel="stylesheet" href="../main.css">
</head>
```

### Creating the first face (Flexbox) <a name="criando-a-primeira-face-flexbox">

In the `flexbox/index.html` file:

```html
<head>
  <link rel="stylesheet" href="../main.css">
</head>

<!-- add: -->
<body>
  <div class="first-face">
    <div class="pip"></div>
  </div>
</body>
```

The result will be a **green square** against a **dark background color**, with a **dark circle** at the top of the square:

![Green square with one pip](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bma53xude1nq2gom1r2l.png)

#### Centering the pip

In the `flexbox/index.html` file add:

```html
<head>
  <link rel="stylesheet" href="../main.css">
  <!-- add: -->
  <link rel="stylesheet" href="style.css">
</head>

<body>
  <div class="first-face">
    <div class="pip"></div>
  </div>
</body>
```

Create the `flexbox/style.css` file. The first step is to tell the browser to turn `.first-face` into a flexbox container.

```css
.first-face {
  display: flex;
}
```

At a glance nothing much changes, but under the hood `display: flex` lets us work with the `MAIN AXIS` and `CROSS AXIS` lines.

![MAIN AXIS VS CROSS AXIS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/67att1gt746370xtii0p.png)

The `first-face` container now has a horizontal main axis. The main axis of a flexbox container can be horizontal or vertical, but the **default is horizontal**. If we added another `pip` to `first-face`, it would appear to the right of the first one. The container also has a vertical cross axis. The cross axis is always perpendicular to the main axis.

The `justify-content` property defines the alignment along the **main axis**. Since we want to center the `pip` along the **main axis**, we'll use the value `center`.

In the `flexbox/style.css` file:

```css
.first-face {
  display: flex;
  justify-content: center; /* add */
}
```

![justify-content: center](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vt4w9vung654db57v4mf.png)

The `align-items` property determines how items are laid out along the **cross axis**. Since we want the `pip` centered along that axis too, we'll use the value `center` here as well.

In the `flexbox/style.css` file:

```css
.first-face {
  display: flex;
  justify-content: center;
  align-items: center; /* add */
}
```

![align-items: center](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5u5no1fcohd4dbfl8h1w.png)

### Creating the second face (Flexbox) <a name="criando-a-segunda-face-flexbox">

In the `flexbox/index.html` file:

```html
<head>
  <link rel="stylesheet" href="../main.css">
  <link rel="stylesheet" href="style.css">
</head>

<body>
  <div class="first-face">
    <div class="pip"></div>
  </div>

  <!-- add: -->
  <div class="second-face">
    <div class="pip"></div>
    <div class="pip"></div>
  </div>
</body>
```

![Adding the "second-face" class in the html](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7amlt4frb1xwmrctqdca.png)

In the `flexbox/style.css` file:

```css
.first-face {
  display: flex;
  justify-content: center;
  align-items: center;
}

/* add */
.second-face {
  display: flex;
}
```

![Adding "display: flex" to the "second-face" class](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l4nt94svr1kn6xmb7euo.png)

Now the two pips sit next to each other. This time, we want the pips on opposite sides of the die.
There is a `justify-content` value that lets us do exactly that: `space-between`. The `space-between` value evenly fills the space between the flex items. Since there are only two pips, it pushes them away from each other.

In the `flexbox/style.css` file:

```css
.first-face {
  display: flex;
  justify-content: center;
  align-items: center;
}

.second-face {
  display: flex;
  justify-content: space-between; /* add */
}
```

![Adding "space-between"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/duzlmlzeqpm4t3nabiki.png)

This is where we run into a problem. Unlike before, we can't set `align-items`, because it would affect both pips. Fortunately, flexbox includes `align-self`. This property lets us align individual items in a flex container along the cross axis however we want! The value we want for this property is `flex-end`.

In the `flexbox/style.css` file:

```css
.first-face {
  display: flex;
  justify-content: center;
  align-items: center;
}

.second-face {
  display: flex;
  justify-content: space-between;
}

/* add */
.second-face .pip:nth-of-type(2) {
  align-self: flex-end;
}
```

The CSS pseudo-class `:nth-of-type()` matches one or more elements of a given type based on their position among a group of siblings.

![Adding "align-self: flex-end"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9fngc56daljzmhzgu7es.png)

### Creating the third face (Flexbox) <a name="criando-a-terceira-face-flexbox">

In the `flexbox/index.html` file:

```html
<head>
  <link rel="stylesheet" href="../main.css">
  <link rel="stylesheet" href="style.css">
</head>

<body>
  <div class="first-face">
    <div class="pip"></div>
  </div>

  <div class="second-face">
    <div class="pip"></div>
    <div class="pip"></div>
  </div>

  <!-- add: -->
  <div class="third-face">
    <div class="pip"></div>
    <div class="pip"></div>
    <div class="pip"></div>
  </div>
</body>
```

In the `flexbox/style.css` file add:

```css
.third-face {
  display: flex;
  justify-content: space-between;
}

.third-face .pip:nth-of-type(2) {
  align-self: center;
}

.third-face .pip:nth-of-type(3) {
  align-self: flex-end;
}
```

![Adding the "third-face" class](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lv0uv7gd15p0gr6unkdq.png)

### Creating the fourth face (Flexbox) <a name="criando-a-quarta-face-flexbox">

On this face of the die we'll need two columns, so we can do the positioning with `justify-content` both horizontally and vertically:

![four pips separated into two columns](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pf3z2auj5urrne37woeb.png)

Add to the `flexbox/index.html` file:

```html
<div class="fourth-face">
  <!-- note the column class here -->
  <div class="column">
    <div class="pip"></div>
    <div class="pip"></div>
  </div>

  <!-- and here we also add the column class -->
  <div class="column">
    <div class="pip"></div>
    <div class="pip"></div>
  </div>
</div>
```

Add to the `flexbox/style.css` file:

```css
.fourth-face {
  display: flex;
  justify-content: space-between;
}
```

![two columns with two pips stuck together in each](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/13becp0ezhen53tke9ag.png)

Still in the `flexbox/style.css` file, let's set the display to flex:

```css
.fourth-face {
  display: flex;
  justify-content: space-between;
}

/* here we set display to flex */
.fourth-face .column {
  display: flex;
}
```

![four pips in a
line](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lx829a8jgpug751iw4v8.png)

Each `column` holds two pips. We can use the `flex-direction` property to set the direction of the main axis to column.

```css
.fourth-face {
  display: flex;
  justify-content: space-between;
}

.fourth-face .column {
  display: flex;
  flex-direction: column; /* add */
}
```

![two columns with two pips stuck together in each](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/13becp0ezhen53tke9ag.png)

The columns are now flex containers. Did you notice how we placed a flex container directly inside another flex container? That's fine! Flexbox doesn't care whether the containers are nested.

The final step is to space out the pips inside the columns, pushing them apart. Since the main axis is now vertical, we can use `justify-content` again.

```css
.fourth-face {
  display: flex;
  justify-content: space-between;
}

.fourth-face .column {
  display: flex;
  flex-direction: column;
  justify-content: space-between; /* add */
}
```

Now we have four perfect die faces:

![four perfect die faces](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hfst5oa4ypjxk1wbv8vs.png)

### Creating the fifth face (Flexbox) <a name="criando-a-quinta-face-flexbox">

The fifth face is similar to the fourth face, but with one more `column`:

Add to the `flexbox/index.html` file:

```html
<div class="fifth-face">
  <div class="column">
    <div class="pip"></div>
    <div class="pip"></div>
  </div>

  <div class="column">
    <div class="pip"></div>
  </div>

  <div class="column">
    <div class="pip"></div>
    <div class="pip"></div>
  </div>
</div>
```

![Adding "fifth-face" in the HTML](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9lobs113m9lhw9y3nhk1.png)

Add to the `flexbox/style.css` file:

```css
.fifth-face {
  display: flex;
  justify-content: space-between;
}
```

Take a good look at the three defined columns:

![Adding `fifth-face` to the css file](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3fffndi49nt17ptu4290.png)

Let's turn each column into a flex container and change the direction of the main axis so we can use `justify-content` vertically.

In the `flexbox/style.css` file:

```css
.fifth-face {
  display: flex;
  justify-content: space-between;
}

/* Add: */
.fifth-face .column {
  display: flex;
  flex-direction: column;
  justify-content: space-between;
}
```

![Still need to center the middle pip](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dnl25q8vrqt4ruvzvw9.png)

Now we just need to center the middle pip.

In the `flexbox/style.css` file:

```css
.fifth-face {
  display: flex;
  justify-content: space-between;
}

.fifth-face .column {
  display: flex;
  flex-direction: column;
  justify-content: space-between;
}

/* add: */
.fifth-face .column:nth-of-type(2) {
  justify-content: center;
}
```

Now we have five perfect die faces, with only the sixth face missing:

![Only the sixth face missing](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0fzutjy7l2l1du5d5gko.png)

### Creating the sixth face (Flexbox) <a name="criando-a-sexta-face-flexbox">

Creating the sixth face of the die is easy: we just need two columns with three pips each.
Add to the `flexbox/index.html` file:

```html
<div class="sixth-face">
  <div class="column">
    <div class="pip"></div>
    <div class="pip"></div>
    <div class="pip"></div>
  </div>

  <div class="column">
    <div class="pip"></div>
    <div class="pip"></div>
    <div class="pip"></div>
  </div>
</div>
```

![The sixth face with HTML only](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ab9ad5t5rmwxi5qvrh5c.png)

Let's reuse some code. Since the sixth face also has just two columns, like the fourth face, we can put `.sixth-face` together with the `.fourth-face` code:

```css
.fourth-face,
.sixth-face {
  display: flex;
  justify-content: space-between;
}

.fourth-face .column,
.sixth-face .column {
  display: flex;
  flex-direction: column;
  justify-content: space-between;
}
```

There we go, all six die faces complete:

![Six complete faces](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jimj6mjo9po7zenyhdws.png)

### Centering everything on the screen <a name="centralizando-tudo-na-tela">

The die faces ended up stacked in a column and, since we've learned how Flexbox works, why not use it on the `body` too to center everything?

Add to the `main.css` file:

```css
body {
  background-color: #1b2231;
  display: flex;
}
```

Now we have all the faces side by side:

![All die faces - side by side](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4xoeu66edup86yp19cex.png)

We have a problem when shrinking the screen horizontally: the squares end up with different sizes:

![squares with different sizes](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r4jkd582fuwnf2xlhljs.png)

Let's add a rule to wrap the line instead of changing the size of the squares:

```css
body {
  background-color: #1b2231;
  display: flex;
  flex-wrap: wrap; /* add */
}
```

![wrapping the line instead of changing the size of the squares](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ga1so33garwh9wv4yln6.png)

Now let's start centering everything.

#### Centering horizontally (main axis):

```css
body {
  background-color: #1b2231;
  display: flex;
  flex-wrap: wrap;
  justify-content: center; /* add */
}
```

![Centering horizontally (main axis)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v4e5efbicoyrjjjid43t.png)

#### Centering vertically (cross axis):

```css
body {
  background-color: #1b2231;
  display: flex;
  flex-wrap: wrap;
  justify-content: center;
  align-items: center; /* add */
}
```

![Centering vertically (cross axis)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bybjqi9md0ihx91fufkl.png)

#### Pulling everything right into the center:

```css
body {
  background-color: #1b2231;
  display: flex;
  flex-wrap: wrap;
  justify-content: center;
  align-items: center;
  align-content: center; /* add */
}
```

![Using "align-content: center"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/87bdkq1oaayefwfdfne8.png)

Now we can shrink the screen freely and all the content stays in the center of the screen:

![all content stays in the center of the screen](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nuc6w8u5eut5vxhgb2oj.png)

-------------------------------------------------------------------

## Grid Template Areas example <a name="exemplo-com-grid-template-areas">

In this example we'll use CSS Grid Layout, since the pips on a traditional die are aligned in **three rows** and **three columns**.
Imagine a die face as a 3x3 grid, where each cell represents the position of a pip:

```
+---+---+---+
| a | b | c |
+---+---+---+
| d | e | f |
+---+---+---+
| g | h | i |
+---+---+---+
```

To create a simple 3 by 3 grid using CSS, all we need to do is set a container element to `display: grid` and say that we want three rows (`grid-template-rows: 1fr 1fr 1fr;`) and three columns (`grid-template-columns: 1fr 1fr 1fr;`) of equal size.

Create the `grid-template-areas/style.css` file with the following code:

```css
[class$="face"] {
  display: grid;
  grid-template-rows: 1fr 1fr 1fr;
  grid-template-columns: 1fr 1fr 1fr;
}
```

The `fr` unit lets us define the size of a row or column as a fraction of the grid container's free space; in our case, **we want one third of the available space**, so we use `1fr` three times.

With this we can already see the pips separated into 3 columns, placed automatically into each cell, from left to right:

![All faces with pips separated into 3 columns](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r85515o9me9vwgpo5pu1.png)

We can simplify this CSS. Instead of writing `1fr 1fr 1fr`, we can use `repeat(3, 1fr)` to repeat the `1fr` unit three times. We can also use the shorthand `grid-template` property, which takes `rows / columns`:

```css
[class$="face"] {
  display: grid;
  grid-template: repeat(3, 1fr) / repeat(3, 1fr);
}
```

Building a 3x3 grid with `grid-template` is simple, **but we won't use this property just yet**. Next we'll use `grid-template-areas` without `grid-template`, and we'll see why we need to add `grid-template` back later.

Create the `grid-template-areas/index.html` file with the following code:

```html
<head>
  <link rel="stylesheet" href="../main.css">
  <link rel="stylesheet" href="style.css">
</head>

<body>
  <div class="first-face">
    <div class="pip"></div>
  </div>

  <div class="second-face">
    <div class="pip"></div>
    <div class="pip"></div>
  </div>

  <div class="third-face">
    <div class="pip"></div>
    <div class="pip"></div>
    <div class="pip"></div>
  </div>

  <div class="fourth-face">
    <div class="pip"></div>
    <div class="pip"></div>
    <div class="pip"></div>
    <div class="pip"></div>
  </div>

  <div class="fifth-face">
    <div class="pip"></div>
    <div class="pip"></div>
    <div class="pip"></div>
    <div class="pip"></div>
    <div class="pip"></div>
  </div>

  <div class="sixth-face">
    <div class="pip"></div>
    <div class="pip"></div>
    <div class="pip"></div>
    <div class="pip"></div>
    <div class="pip"></div>
    <div class="pip"></div>
  </div>
</body>
```

![Faces with HTML only](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ws9ygg92wm16yrhixhcj.png)

In the code above we can already see a difference from the previous Flexbox example: there are no `column` classes. So the HTML file is already simpler. Now let's move on to the CSS file, `grid-template-areas/style.css`:

![image of the project's file structure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/584i7zqp8hwccp891tlx.png)

### Positioning the Pips <a name="posicionando-os-pontos">

We've now reached the point where we need to position the pips for each face value of the die. It would be nice if the pips automatically flowed into the correct grid positions for each value. Unfortunately, we'll need to set the position of each pip individually.

Remember the ASCII table at the start of this example?
Let's create something very similar using CSS. **Instead of labeling the cells in row order, we'll use this specific order**, so that we only need a minimal amount of CSS to fix the edge cases:

```
+---+---+---+
| a |   | c |
+---+---+---+
| e | g | f |
+---+---+---+
| d |   | b |
+---+---+---+
```

![tables with the letters](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bltzll5ilf21y872t7ua.png)

Two of the cells are left empty, since they are never used on our dice.

#### Using `grid-template-areas` <a name="usando-grid-template-areas">

We can translate this layout into CSS using the magical `grid-template-areas` property (which replaces the `grid-template` used above):

Add to the `grid-template-areas/style.css` file:

```css
[class$="face"] {
  display: grid;
  grid-template-areas:
    "a . c"
    "e g f"
    "d . b";
}
```

![using grid-template-areas](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xi4d6z6j70695pk03dws.png)

So, instead of using traditional units to size our rows and columns, we can simply refer to each cell by a name. The syntax itself provides a visualization of the grid structure, just like our ASCII table. The names are defined by the `grid-area` property of the grid item. A dot marks an empty cell.

Apparently, looking at the image above, nothing has changed yet. We still need to use `grid-area`.

#### Using `grid-area` <a name="usando-grid-area">

We use the `grid-area` property to give a grid item a name. The grid template (above) can then reference the item by name to place it in a specific area of the grid. The `:nth-of-type()` pseudo-class lets us target each pip individually.

* Positioning the letter `b`

With the code below we can see where pip 2 is:

```css
.pip:nth-of-type(2) {
  background-color: red;
}
```

![position of the letter `b`](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k5npkvyxd2uxhunelubk.png)

```css
.pip:nth-of-type(2) {
  grid-area: b;
}
```

Result after the letter `b` has been positioned correctly:

![result after `b` is positioned correctly](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kw8ofump6jjlx1385n4o.png)

* Positioning the letter `c`

![position of the letter `c`](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8bqv4isqoeoqudp8lv28.png)

```css
.pip:nth-of-type(3) {
  grid-area: c;
}
```

Result after the letter `c` has been positioned correctly:

![result after `c` is positioned correctly](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/icniboecbu2r3od3y3p0.png)

* Positioning the letter `d`

![position of the letter `d`](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/or1rbeoc05sjioe0odi3.png)

```css
.pip:nth-of-type(4) {
  grid-area: d;
}
```

Result after the letter `d` has been positioned correctly:

![result after `d` is positioned correctly](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gazyngryc7nbjqov0dhr.png)

* Positioning the letter `e`

![position of the letter `e`](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ihho4tjgq5tuvxbqtu0.png)

```css
.pip:nth-of-type(5) {
  grid-area: e;
}
```

Result after the letter `e` has been positioned correctly:

![result after `e` is positioned correctly](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9flpx4oy0eorrwjo5ez1.png)

* Positioning the letter `f`

![position of the letter `f`](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gg61ddjxryxze2jantas.png)

```css
.pip:nth-of-type(6) {
  grid-area: f;
}
```

Result after the letter
`f` has been positioned correctly:

![result after `f` is positioned correctly](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dyjnhqihqis6vr5mgmk2.png)

* Positioning the letter `g`

![position of the letter `g`](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nx0wstw3hd2jhyx398eq.png)

As you can see, the values 1, 3 and 5 are still incorrect. Thanks to the order of the template areas we chose earlier, we only need to reposition the last `pip` of each of those dice. To get the result we want, we combine the `:nth-of-type(odd)` and `:last-of-type` pseudo-classes:

```css
.pip:nth-of-type(odd):last-of-type {
  grid-area: g;
}
```

* `:nth-of-type(odd)` = all (odd) siblings
* `:last-of-type` = last sibling

![explaining odd siblings](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jnlgkc5dukshcnrzfpr6.png)

Result after the letter `g` has been positioned correctly:

![result after `g` is positioned correctly](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/00xkzj9848jxmcsvux8v.png)

#### Centering the pips in their cells <a name="centralizando-os-pontos-nas-celulas">

It looks odd, doesn't it? That's because we need to center the pips. In the image below, we can see the pips aligned to the left:

![Pips aligned to the left](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kpt1z5exp5asxv6co17i.png)

In the same `grid-template-areas/style.css` file let's add:

```css
.pip {
  align-self: center;
  justify-self: center;
}
```

![pips centered in the cell](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/udp18ygrb74dsruaqm14.png)

#### But I can still see a problem <a name="mas-ainda-estou-exergando-um-problema">

The padding on each face is different!!!

![showing the different padding](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a12abek430smgo4ofyot.png)

![face with 4 pips](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ihlfiw4x27k5jyrx813v.png)

![face with 5 pips](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l5f9oau1cdv8mput6b1w.png)

Now let's use `grid-template-rows` and `grid-template-columns` to make all the cells the same size.

In the `grid-template-areas/style.css` file:

```css
[class$="face"] {
  display: grid;
  grid-template-rows: repeat(3, 1fr);    /* add */
  grid-template-columns: repeat(3, 1fr); /* add */
  grid-template-areas:
    "a . c"
    "e g f"
    "d . b";
}
```

There we go!!! The result is perfect!!

![perfect result](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qmifo8wyi6y9swbxbbhd.png)

![inspecting face 4 - BEFORE and AFTER](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pgtdupgjwp9x0s75xca3.png)

Just one more improvement to the code. We can use shorthands to swap two lines for one.

In the `grid-template-areas/style.css` file:

```css
[class$="face"] {
  grid-template-rows: repeat(3, 1fr);    /* remove */
  grid-template-columns: repeat(3, 1fr); /* remove */
  grid-template: repeat(3, 1fr) / repeat(3, 1fr); /* add */
}

.pip {
  align-self: center;   /* remove */
  justify-self: center; /* remove */
  place-self: center;   /* add */
}
```

The result is perfect, but the code got quite complex. Shall we try using CSS Grid Layout another way? Let's see that in the next example...
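Before we do, here's a recap: piecing together the steps above, the final `grid-template-areas/style.css` for this example looks roughly like this (note that `grid-template` must come before `grid-template-areas`, since the shorthand resets the areas):

```css
[class$="face"] {
  display: grid;
  grid-template: repeat(3, 1fr) / repeat(3, 1fr);
  grid-template-areas:
    "a . c"
    "e g f"
    "d . b";
}

.pip {
  place-self: center;
}

.pip:nth-of-type(2) { grid-area: b; }
.pip:nth-of-type(3) { grid-area: c; }
.pip:nth-of-type(4) { grid-area: d; }
.pip:nth-of-type(5) { grid-area: e; }
.pip:nth-of-type(6) { grid-area: f; }
.pip:nth-of-type(odd):last-of-type { grid-area: g; }
```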
-------------------------------------------------------------------

## Grid Rows and Grid Columns example <a name="exemplo-com-grid-rows-e-grid-columns">

In this example we'll also use CSS Grid Layout with **three rows** and **three columns**, where each cell represents the position of a pip:

```
+---+---+---+
| . | . | . |
+---+---+---+
| . | . | . |
+---+---+---+
| . | . | . |
+---+---+---+
```

### Understanding how `grid-row` and `grid-column` work <a name="entendendo-como-funciona-grid-row-e-grid-column">

Before building the die with this solution, let's just understand how `grid-row` and `grid-column` work. This will be a quick explanation, since there's a lot to learn on the subject and it would go beyond the scope of this article.

Let's create the files right away. In the `grid-rows-and-grid-columns/index.html` file you can put this code:

```html
<head>
  <link rel="stylesheet" href="../main.css">
  <link rel="stylesheet" href="style.css">
</head>

<body>
  <div class="face">
    <div class="square"></div> <!-- square is temporary -->
  </div>
</body>
```

![folder structure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ovdolhk1chfs7cyjsd82.png)

And in the `grid-rows-and-grid-columns/style.css` file you can put this code:

```css
[class$="face"] {
  display: grid;
  grid-template: repeat(3, 1fr) / repeat(3, 1fr);
}

.square {
  background-color: white;
  grid-column-start: 1;
  grid-column-end: 2;
}
```

The result will be this:

![face with a square in the first column](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/16rqpsl39btcbi7a1h39.png)

We have a square in the first row occupying the space from column line 1 to column line 2. Since we didn't set a height or a width for the square, it ends up filling the entire cell:

![filling the entire cell](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ftoi5jg43uep6qa7zy1i.png)

We can see 4 vertical lines, so we can specify that we want the square to stretch to line 3:

```css
.square {
  background-color: white;
  grid-column-start: 1;
  grid-column-end: 3; /* change from 2 to 3 */
}
```

![square turning into a rectangle](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oyf2n3vobdyahxrs7hml.png)

We can specify that we want the square to stretch to line 4. We can use the number `4` or we can use `-1`:

```css
.square {
  background-color: white;
  grid-column-start: 1;
  grid-column-end: -1; /* you can use 4 or -1 */
}
```

![square spanning 3 columns](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6pcetm24ci17l2pg6igj.png)

We can now specify that these squares should start on horizontal line 2:

```css
.square {
  background-color: white;
  grid-column-start: 1;
  grid-column-end: -1;
  grid-row-start: 2; /* add */
}
```

![squares positioned on row line 2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/elgagqliorvwlf1yffhi.png)

### Positioning the pips with `grid-row` and `grid-column` <a name="posicionando-os-pontos-com-grid-row-e-grid-column">

Now let's get to the solution.
In the `grid-rows-and-grid-columns/index.html` file, this will be the final code, meaning we won't modify it again:

```html
<head>
  <link rel="stylesheet" href="../main.css">
  <link rel="stylesheet" href="style.css">
</head>

<body>
  <div class="first-face">
    <div class="pip center middle"></div>
  </div>

  <div class="second-face">
    <div class="pip"></div>
    <div class="pip right bottom"></div>
  </div>

  <div class="third-face">
    <div class="pip"></div>
    <div class="pip middle center"></div>
    <div class="pip bottom right"></div>
  </div>

  <div class="fourth-face">
    <div class="pip"></div>
    <div class="pip right"></div>
    <div class="pip bottom"></div>
    <div class="pip bottom right"></div>
  </div>

  <div class="fifth-face">
    <div class="pip"></div>
    <div class="pip right"></div>
    <div class="pip middle center"></div>
    <div class="pip bottom"></div>
    <div class="pip bottom right"></div>
  </div>

  <div class="sixth-face">
    <div class="pip"></div>
    <div class="pip right"></div>
    <div class="pip"></div>
    <div class="pip right"></div>
    <div class="pip bottom"></div>
    <div class="pip bottom right"></div>
  </div>
</body>
```

![Faces with HTML only](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ws9ygg92wm16yrhixhcj.png)

And in the `grid-rows-and-grid-columns/style.css` file you can put this code:

```css
[class$="face"] {
  display: grid;
  grid-template: repeat(3, 1fr) / repeat(3, 1fr);
}

.top {
  grid-row-start: 1;
}

.middle {
  grid-row-start: 2;
}

.bottom {
  /* grid-row-start: 3; */
  grid-row-end: -1;
}

.left {
  grid-column-start: 1;
}

.center {
  grid-column-start: 2;
}

.right {
  /* grid-column-start: 3; */
  grid-column-end: -1;
}
```

The result is almost what we expect:

![almost the expected result](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7jv4bhjz8bzdb6cb5pqc.png)

We need to center each pip in its cell:

```css
.pip {
  place-self: center;
}
```

Now the result is perfect:

![perfect result](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30byar5gium0scl9plde.png)

![perfect sixth face](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4zfhjf9jhiiyb7b83t1w.png)

-------------------------------------------------------------------

## Final thoughts <a name="considerações-finais">

We can conclude that the code of the second approach, with `grid-template-areas`, ended up more confusing than the other approaches. The last approach, with `grid-rows` and `grid-columns`, required less CSS and makes it easy to understand what is happening, which also makes it easier to maintain. So here's the ranking:

* 🥇 1st place: Example 3 (`grid-rows` and `grid-columns`)
* 🥈 2nd place: Example 1 (Flexbox)
* 🥉 3rd place: Example 2 (`grid-template-areas`)

That's it... thanks for reading and see you next time 🙂👋

-------------------------------------------------------------------

The following resources were used as the basis for this article:

* [Getting Dicey With Flexbox](https://davidwalsh.name/flexbox-dice)
* [Creating dice using CSS grid](https://dev.to/ekeijl/creating-dice-using-css-grid-j4)
* [[css] Designing Dice using CSS Grid](https://youtu.be/HyxtLn8g4qc?si=M4fLtgI0F3ZZIaTu)
maiquitome
1,913,222
Types, Symptoms, and Treatments of Anxiety Disorders
One of the most prevalent mental health issues affecting millions of individuals globally is anxiety...
0
2024-07-05T22:11:35
https://dev.to/james_andrew_4dca4948fa2e/types-symptoms-and-treatments-of-anxiety-disorders-2f0
health, healthcarem
Anxiety disorders are among the most prevalent mental health issues, affecting millions of individuals globally. These conditions are marked by excessive worry and fear, along with related behavioral changes. Understanding the different types, signs, and treatments of [anxiety disorders](https://www.drjoexplains.com/) is crucial for managing this illness effectively.

**Types of Anxiety Disorders**

- Generalized Anxiety Disorder (GAD): GAD is characterized by excessive, ongoing worry over a variety of life's circumstances, including work, health, and daily activities. People with GAD can find it hard to control their worry, which can interfere with day-to-day tasks.
- Panic Disorder: Its hallmark is recurrent, unexpected panic attacks: sudden, acute bouts of terror that may include palpitations, sweating, shaking, and feelings of impending doom. These attacks can happen without warning and cause a great deal of distress.
- Social Anxiety Disorder (SAD): Often referred to as social phobia, SAD is an extreme fear of being scrutinized, judged, or embarrassed by others in social situations. Avoiding social situations because of this fear can harm one's personal and professional life.
- Specific Phobias: Intense, irrational fears of particular objects or situations, such as spiders, flying, or heights. These fears are typically out of proportion to the real threat and can lead to avoidance behaviors.
- Obsessive-Compulsive Disorder (OCD): Its symptoms include recurrent, unwanted thoughts (obsessions) and compulsive, repeated behaviors (compulsions). People with OCD engage in these behaviors to lessen anxiety or avoid a feared event, even though the actions are often out of proportion to it.
- Post-Traumatic Stress Disorder (PTSD): PTSD can develop after an assault, an accident, or other exposure to a traumatic event. Symptoms include flashbacks, nightmares, severe anxiety, and uncontrollable, intrusive thoughts about the incident.

**Causes of Anxiety**

Anxiety disorders have a wide range of underlying causes, including genetic, environmental, psychological, and developmental factors. Typical causes include:

- Genetics: Anxiety disorders run in families, which can make a person more likely to experience related symptoms.
- Brain Chemistry: Mood and anxiety levels can be affected by imbalances in neurotransmitters such as dopamine and serotonin.
- Environmental Stressors: Traumatic experiences, ongoing stress, and major life transitions such as divorce or losing a job can trigger or worsen anxiety disorders.
- Personality Traits: Anxiety is more common in people with certain personality profiles, such as high levels of neuroticism.
- Medical Conditions: Thyroid problems, chronic illnesses, and other health issues may exacerbate anxiety symptoms.

**Anxiety Symptoms**

[Anxiety symptoms](https://www.drjoexplains.com/buy-xanax-alprazolam-online-without-prescription/) can vary widely depending on the type of disorder but generally include both emotional and physical manifestations. Common symptoms are:

Emotional symptoms:

- Persistent worry or dread
- Restlessness or feeling on edge
- Difficulty concentrating
- Irritability
- Anticipating the worst

Physical symptoms:

- Elevated heart rate or palpitations
- Shortness of breath
- Sweating
- Trembling or shaking
- Fatigue
- Muscle tension
- Headaches
- Trouble sleeping

**Treatment for Anxiety**

Effective treatment of anxiety is usually comprehensive, combining therapy, medication, and lifestyle changes. The main forms of treatment are:

Psychotherapy:

- Cognitive Behavioral Therapy (CBT): A widely used therapy that helps patients recognize and address the harmful thought patterns and behaviors that fuel anxiety. It works particularly well for GAD, SAD, and panic disorder.
- Exposure Therapy: A therapeutic approach that aims to decrease avoidance tendencies and anxiety by gradually introducing patients to feared objects or situations under supervision.
- Dialectical Behavior Therapy (DBT): Originally designed to treat borderline personality disorder, DBT teaches distress tolerance and emotional regulation techniques that are helpful in managing anxiety.

Medication for Anxiety:

- Antidepressants: Doctors frequently prescribe selective serotonin reuptake inhibitors (SSRIs) and serotonin-norepinephrine reuptake inhibitors (SNRIs) for anxiety disorders. They lessen anxiety symptoms by helping regulate neurotransmitter levels.
- Benzodiazepines: These drugs can temporarily relieve severe anxiety symptoms, but because of the risk of dependence they are usually not advised for long-term use.
- Beta-Blockers: These drugs can help control the physical signs of nervousness, such as tremors and a fast heartbeat, especially before events like public speaking.
- Buspirone: Compared to benzodiazepines, this medication may carry less risk for GAD patients and have fewer side effects.

Lifestyle Changes:

- Exercise: Regular physical activity can help lower anxiety levels through endorphin release and enhance general well-being.
- Diet: A well-balanced diet high in whole grains, fruits, vegetables, and lean proteins supports mental health. Keeping alcohol and caffeine intake low can also help control anxiety symptoms.
- Mindfulness and Relaxation Techniques: Practices such as yoga, meditation, and deep breathing promote mental calm and reduce anxiety.
- Sleep: Getting adequate sleep and maintaining good sleep hygiene are essential for controlling anxiety symptoms.

**In Summary**

Anxiety disorders are complex illnesses, and treating them effectively requires a multimodal strategy. Understanding their various forms, causes, and symptoms is the first step towards obtaining the right treatment. With adequate therapy, medication, and lifestyle changes, people with anxiety disorders can lead happy, successful lives. Early intervention and a supportive environment are essential for managing and overcoming anxiety.
james_andrew_4dca4948fa2e
1,913,220
most hiring processes are a scam
most hiring processes are a scam, for you Misconception first, you gotta go through a...
0
2024-07-05T22:08:23
https://dev.to/jpbp/most-hiring-processes-are-a-scam-349h
career
most hiring processes are a scam, **for you**

## Misconception

first, you gotta go through a "resumé" filtration. if you are hired based on this garbage, you're missing the point of what working is about. you want to work at a company that **you** care about, and the company wants that as well. but how are they going to know that if they're focused on collecting hundreds of resumes that mean nothing? they are vague; they have no precise real-world tie.

the second step after the filtration is a "light interview" call, where they check that you are not completely incompetent or totally misaligned with the purpose of the job.

## They take advantage of your energy

if you get lucky, they might decide to interview you for real and check whether you have experience, maybe do some live coding, and then it doesn't drag on much longer. if not, you are probably already in the middle of a kidnapping. now it's a fight to see who is able to do the most to get the job. maybe they'll ask you to do their one-week challenge and compete with 10 other people doing the same thing, and you don't pass because you have already done this 10 times and you are tired of this bs.

as funny as it seems, at this point you have probably already lost some respect for the people hiring you. yeah, they may be impressed with the technical and soft skills you presented to them in such a personalized manner, but now, after all this, after putting in all your effort to reach the last step, they can just say: "well, we know you want X as a salary, but we are going through this and that, can we do X - 35%?"

## The solution?

I believe the solution is pretty simple: **simplify**. If you are hiring, add a space for cover letters and make sure your offer is appealing; this way, even if you are willing to make your candidates spend time and energy, you at least attract more excited people. this requires no money and barely any time. **Trust your gut**. Isn't it said that drive is more valuable than technical skills? So let's add things that combine even better with drive: **excitement** and **energy**.

If you want to be hired, be more active. Send personalized messages with your experience/skills, be seen, get proof of work, don't be afraid of looking like a fool, keep your portfolio up to date, build stuff.
jpbp
1,913,216
Building a Real-Time Collaborative Notes Application with htmx 2.0
Hey everyone! 👋 If you caught my blog last week about Remix and React Router, you know I've been...
0
2024-07-05T22:05:54
https://dev.to/sohinip/building-a-real-time-collaborative-notes-application-with-htmx-20-1l4
webdev, htmx, javascript, tutorial
Hey everyone! 👋 If you caught my blog last week about Remix and React Router, you know I've been diving into new technologies over the weekends. This time, I decided to play around with htmx 2.0, and I came up with a neat little project: a real-time collaborative notes application. Let me tell you all about it! 🚀

### What I Built

I built a collaborative notes application where you can add, edit, and delete notes in real-time. The coolest part? All of this is done with minimal JavaScript, thanks to htmx!

### Why htmx 2.0?

htmx 2.0 is pretty awesome for a few reasons:

- **Partial Page Updates**: You can update specific parts of a webpage without refreshing the whole thing.
- **Enhanced Interactivity**: It helps you create interactive web apps with less JavaScript.
- **Simplified Data Handling**: Managing data operations dynamically is super easy with htmx.

### Key Features

1. **Real-time Updates**: Add, edit, and delete notes, and see changes instantly.
2. **Minimal JavaScript**: htmx handles dynamic content updates for us.
3. **Server-Side Rendering**: Notes are rendered on the server and updated dynamically on the client side.

### Code Snippets

Here are some snippets to show how I used htmx:

**Adding a New Note:**

```html
<form id="add-note-form" hx-post="/add-note" hx-target="#notes-list" hx-swap="beforeend" class="add-note-form">
    <input type="text" name="title" placeholder="Note Title" required>
    <textarea name="content" placeholder="Note Content" required></textarea>
    <button type="submit">Add Note</button>
</form>
```

This form uses htmx's `hx-post` to send the new note to the server, and `hx-target` with `hx-swap="beforeend"` to append the returned note to the list without reloading the page.

**Editing a Note:**

```html
<button class="edit-button" hx-get="/edit-note/${note.id}">Edit</button>
```

The edit button sends a GET request to fetch the edit form for a specific note, allowing in-place editing.

**Deleting a Note:**

```html
<button class="delete-button" hx-delete="/delete-note/${note.id}">Delete</button>
```

The delete button sends a DELETE request to remove the note, instantly updating the UI.

### Project Structure

- **index.html**: The main HTML file.
- **server.js**: A simple Express.js server to handle requests.
- **style.css**: Basic styling for the application.

You can find the complete code for this project on my [GitHub repository](https://github.com/sohinipattanayak/html-notes).

### Conclusion

This project was a lot of fun and showcased the power of htmx 2.0 for creating dynamic web applications. I hope you find this project as exciting as I did! Check out the screenshot below to see the app in action:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d1d3xv5uexzlrylu2zgl.png)

Happy coding! ✨

Sohini
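P.S. The repo has the real `server.js`; if you just want a feel for the shape of the endpoints, here's a minimal hypothetical Express sketch (in-memory storage, invented handler details, and no escaping of user input, which a real app must add):

```javascript
const express = require('express');

const app = express();
app.use(express.urlencoded({ extended: true })); // htmx posts forms as urlencoded

let nextId = 1;
const notes = []; // in-memory store, resets on restart

// Returns an HTML fragment; htmx swaps it into #notes-list with beforeend
app.post('/add-note', (req, res) => {
  const note = { id: nextId++, title: req.body.title, content: req.body.content };
  notes.push(note);
  res.send(`<div class="note" id="note-${note.id}">
    <h3>${note.title}</h3><p>${note.content}</p>
    <button class="delete-button" hx-delete="/delete-note/${note.id}"
            hx-target="#note-${note.id}" hx-swap="outerHTML">Delete</button>
  </div>`);
});

// An empty body plus an outerHTML swap removes the note element from the page
app.delete('/delete-note/:id', (req, res) => {
  const idx = notes.findIndex(n => n.id === Number(req.params.id));
  if (idx !== -1) notes.splice(idx, 1);
  res.send('');
});

app.listen(3000, () => console.log('Listening on http://localhost:3000'));
```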
sohinip
1,913,219
Part 3: Comparing All-in-One Architecture, Layered Architecture, and Clean Architecture
In software design, choosing the right architecture is crucial for ensuring maintainability,...
27,935
2024-07-05T22:05:32
https://dev.to/moh_moh701/comparing-all-in-one-architecture-layered-architecture-and-clean-architecture-e28
architecture
In software design, choosing the right architecture is crucial for ensuring the maintainability, scalability, and robustness of the application. This article provides a simple comparison between All-in-One architecture, Layered architecture, and Clean Architecture.

#### All-in-One Architecture

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1tl5ti4tt8i85d6czcxv.jpg)

**Overview:**
- Typically contained in one (large) Visual Studio project.
- Starts with File → New Project.
- "Layers" are represented as folders within the project.

**Pros:**
- Simple deployment process.
- Easier to develop initially, as all components are in one place.
- Performance can be optimized more easily, as there is no inter-service communication overhead.

**Cons:**
- Can be difficult to maintain as the application grows.
- Scalability is limited; scaling one part of the application requires scaling the entire application.
- A single bug can potentially bring down the entire application.

**Use Case:**
- Suitable for small, straightforward applications or startups in the early stages where rapid development is more critical than long-term maintainability.

#### Layered Architecture

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/he5361apvnhlszz0853t.jpg)

**Overview:**
- Split according to concern, dividing the application into layers such as presentation, business logic, data access, and sometimes more.
- Each layer has a specific responsibility and interacts only with the adjacent layers.

**Pros:**
- Promotes reuse of layers across different applications.
- Easier to maintain due to the clear separation of concerns.
- Pluggable, allowing layers to be swapped or updated independently.

**Cons:**
- Still involves coupling between layers, which can lead to dependencies.
- Behaves as a single application, meaning that issues in one layer can affect the entire application.
- Performance can be affected by the overhead of communication between layers.

**Use Case:**
- Suitable for medium to large applications where separation of concerns is crucial for maintainability and scalability.

#### Clean Architecture

**Overview:**
- Clean Architecture is a design philosophy that emphasizes separation of concerns and independence from frameworks, databases, and other external agencies.
- It organizes code into layers that separate the business logic from the infrastructure and user interface.

**Pros:**
- High maintainability and testability due to the clear separation of concerns.
- Independence from frameworks and technologies, making it easier to swap out components.
- Facilitates scalability and flexibility in responding to changing requirements.

**Cons:**
- More complex to implement initially compared to simpler architectures.
- Requires a deeper understanding of design principles and architectural patterns.
- Can introduce overhead in small applications where such rigor is unnecessary.

**Use Case:**
- Ideal for large, complex applications where long-term maintainability, testability, and flexibility are critical.
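To make that dependency direction concrete, here is a minimal sketch (all names invented for illustration, shown in plain JavaScript to stay framework-neutral): the business rule depends only on an abstraction, and the concrete infrastructure detail is wired in at the outermost layer.

```javascript
// Core/business layer: knows nothing about databases or frameworks.
// It only declares what it needs (a repository) and uses it.
function makeRegisterOrder(orderRepository) {
  return async function registerOrder(order) {
    if (!order.items || order.items.length === 0) {
      throw new Error('An order must contain at least one item.');
    }
    return orderRepository.save(order); // an abstraction, not a concrete DB
  };
}

// Infrastructure layer: one concrete detail that satisfies the abstraction.
// Could be swapped for SQL, an HTTP API, etc. without touching the core.
const inMemoryOrderRepository = {
  orders: [],
  async save(order) {
    this.orders.push(order);
    return { ...order, id: this.orders.length };
  },
};

// Composition root: wires the detail into the core at the outermost layer.
const registerOrder = makeRegisterOrder(inMemoryOrderRepository);
registerOrder({ items: ['book'] }).then(console.log); // { items: ['book'], id: 1 }
```

Swapping the in-memory repository for a SQL or HTTP implementation would not require touching `makeRegisterOrder`, which is exactly the property that makes Clean Architecture testable and flexible.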
### Comparison Summary | Aspect | All-in-One Architecture | Layered Architecture | Clean Architecture | |----------------------|------------------------------|----------------------------|--------------------------| | **Deployment** | Simple | Layered | Modular | | **Scalability** | Limited | Moderate | High | | **Maintainability** | Low | Moderate | High | | **Complexity** | Low | Moderate | High | | **Use Case** | Small applications | Medium to large applications | Large, complex applications | By understanding the strengths and weaknesses of each architecture, you can choose the one that best fits the needs of your application. For a deeper dive into Clean Architecture, you can refer to [this article](https://dev.to/moh_moh701/part-1-what-is-clean-architecture-4bn1). This comparison highlights how each architecture has its place in software development, and the right choice depends on the specific requirements and constraints of your project.
moh_moh701
1,913,217
WQERTY
HI  J I
0
2024-07-05T22:04:36
https://dev.to/ishaan_singhal_f3b6b687f3/wqerty-27fj
HI  J I
ishaan_singhal_f3b6b687f3
1,911,916
Introduction to Functional Programming in JavaScript: High order functions #3
High-order functions and currying are powerful concepts that enable developers to write more modular,...
0
2024-07-05T22:00:00
https://dev.to/francescoagati/introduction-to-functional-programming-in-javascript-high-order-functions-3-262d
javascript
High-order functions and currying are powerful concepts that enable developers to write more modular, flexible, and expressive code. These concepts build on the principles of treating functions as first-class citizens and leveraging closures. #### High-Order Functions A high-order function is a function that either takes one or more functions as arguments, returns a function, or both. High-order functions are a cornerstone of functional programming because they allow for greater abstraction and code reuse. ##### Examples of High-Order Functions 1. **Functions as Arguments** ```javascript const numbers = [1, 2, 3, 4, 5]; const filter = (arr, fn) => { const result = []; for (const item of arr) { if (fn(item)) { result.push(item); } } return result; }; const isEven = (num) => num % 2 === 0; console.log(filter(numbers, isEven)); // [2, 4] ``` In this example, `filter` is a high-order function that takes an array and a function (`isEven`) as arguments. The `isEven` function is applied to each element of the array to filter out the even numbers. 2. **Functions as Return Values** ```javascript const createGreeter = (greeting) => { return (name) => `${greeting}, ${name}!`; }; const sayHello = createGreeter('Hello'); console.log(sayHello('Alice')); // 'Hello, Alice!' console.log(sayHello('Bob')); // 'Hello, Bob!' ``` Here, `createGreeter` is a high-order function that returns a new function. The returned function uses the `greeting` parameter from its outer scope, demonstrating how closures are used in high-order functions. 3. **Built-in High-Order Functions** JavaScript provides several built-in high-order functions, such as `map`, `filter`, and `reduce`. ```javascript const numbers = [1, 2, 3, 4, 5]; const doubled = numbers.map(num => num * 2); console.log(doubled); // [2, 4, 6, 8, 10] const evens = numbers.filter(num => num % 2 === 0); console.log(evens); // [2, 4] const sum = numbers.reduce((total, num) => total + num, 0); console.log(sum); // 15 ``` These functions enable concise and expressive operations on arrays, reducing the need for imperative loops and enhancing code readability. #### Currying Currying is a technique of transforming a function that takes multiple arguments into a sequence of functions, each taking a single argument. This allows for the creation of specialized functions from general ones and facilitates function composition. ##### Examples of Currying 1. **Basic Currying** ```javascript const add = (a) => (b) => a + b; const addFive = add(5); console.log(addFive(3)); // 8 console.log(addFive(10)); // 15 ``` In this example, `add` is a curried function. The first call to `add` with argument `5` returns a new function that adds `5` to its argument. This demonstrates how currying can create specialized functions. 2. **Currying with Multiple Arguments** ```javascript const multiply = (a) => (b) => (c) => a * b * c; console.log(multiply(2)(3)(4)); // 24 ``` Here, `multiply` is a curried function that takes three arguments one at a time. This approach enables partial application of functions, where some arguments are fixed early, and the remaining arguments are supplied later. 3. **Practical Currying with Utility Libraries** JavaScript utility libraries like Lodash provide convenient methods for currying functions. 
```javascript const _ = require('lodash'); const add = (a, b, c) => a + b + c; const curriedAdd = _.curry(add); console.log(curriedAdd(1)(2)(3)); // 6 console.log(curriedAdd(1, 2)(3)); // 6 ``` Using Lodash's `curry` method, we can easily transform a function into its curried form, allowing for flexible argument application. #### Benefits of High-Order Functions and Currying - **Code Reusability**: High-order functions and currying promote code reuse by allowing generic functions to be easily customized for specific use cases. - **Modularity**: Breaking down functions into smaller, composable pieces enhances modularity, making code easier to maintain and understand. - **Function Composition**: High-order functions and currying facilitate function composition, enabling developers to build complex operations by combining simpler functions.
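Putting the two ideas together, currying pairs naturally with function composition. Here is a minimal sketch (the `compose` helper and the pipeline name are illustrative, not from the examples above):

```javascript
// compose applies functions right-to-left
const compose = (...fns) => (x) => fns.reduceRight((acc, fn) => fn(acc), x);

const add = (a) => (b) => a + b;
const multiply = (a) => (b) => a * b;

// Build a pipeline from small, curried pieces
const addTwoThenTriple = compose(multiply(3), add(2));

console.log(addTwoThenTriple(4)); // 18, i.e. (4 + 2) * 3
```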
francescoagati
1,913,215
User Management Automation With BASH SCRIPT
Introduction Managing user accounts and groups on Linux systems can indeed be...
0
2024-07-05T21:58:55
https://dev.to/tophe/user-management-automation-with-bash-script-9n4
devops, linux
# Introduction

Managing user accounts and groups on Linux systems can indeed be time-consuming, especially when dealing with multiple users. As a SysOps Engineer, you can simplify this process by creating a Bash script that automates user and group management. The script can read user and group information from a file, create users, assign them to groups, and set passwords. Let's explore the step-by-step process of achieving this automation.

This task is courtesy of HNG, an internship program designed to enhance your programming knowledge across various domains. You can find more information about HNG on their website: [HNG Internship](https://hng.tech/internship). Now, let's dive into the details! 🚀🔍

##### Why automate?

Have you ever performed a long and complex task at the command line and thought, "Glad that's done. Now I never have to worry about it again!"? I have—frequently. I ultimately figured out that almost everything that I ever need to do on a computer will need to be done again sometime in the future.

---

#### Prerequisite

1. Basic knowledge of the Linux command line
2. Text Editor

---

#### Script Code

```
#!/bin/bash
# automating user account creation

# Check if the script is run with the input file argument
if [ -z "$1" ]; then
  echo "Usage: sudo $0 <name-of-text-file>"
  exit 1
fi

# Input file (usernames and groups)
input_file="$1"

# Log file
log_file="/var/log/user_management.log"

# Secure password storage file
password_file="/var/secure/user_passwords.txt"

# Create secure directory
sudo mkdir -p /var/secure
sudo chmod 700 /var/secure
sudo touch "$password_file"
sudo chmod 600 "$password_file"

# Function to generate a random password
generate_password() {
  openssl rand -base64 12
}

# Read input file line by line
while IFS=';' read -r username groups; do
  # Skip empty lines or lines that don't have the proper format
  [[ -z "$username" || -z "$groups" ]] && continue

  # Create groups if they don't exist
  for group in $(echo "$groups" | tr ',' ' '); do
    sudo groupadd "$group" 2>/dev/null || echo "Group $group already exists"
  done

  # Create user if not exists
  if id "$username" &>/dev/null; then
    echo "User $username already exists"
    echo "$(date '+%Y-%m-%d %H:%M:%S') - User $username already exists" | sudo tee -a "$log_file" > /dev/null
  else
    sudo useradd -m -s /bin/bash -G "$groups" "$username" || { echo "Failed to add user $username"; continue; }

    # Set password for newly created user
    password=$(generate_password)
    echo "$username:$password" | sudo chpasswd || { echo "Failed to set password for $username"; continue; }

    # Log actions
    echo "$(date '+%Y-%m-%d %H:%M:%S') - Created user $username with groups: $groups" | sudo tee -a "$log_file" > /dev/null

    # Store password securely
    echo "$username:$password" | sudo tee -a "$password_file" > /dev/null
  fi
done < "$input_file"

echo "$(date '+%Y-%m-%d %H:%M:%S') - User management process completed." | sudo tee -a "$log_file" > /dev/null
```

#### Script Overview

The script performs the following tasks:

1. Creates two files: a log file to store logs and another to store user passwords.
1. Sets the right permissions for both files.
1. Reads a list of users and groups from a file.
1. Creates users and assigns them to specified groups.
1. Generates random passwords for each newly created user.
1. Logs all actions to /var/log/user_management.log.
1. Stores the generated passwords securely in /var/secure/user_passwords.txt.
#### Key Features

Automated User and Group Creation: The script automates the creation of users and their respective groups by reading from a file containing user and group information. Personal groups are created for each user to ensure clear ownership and enhanced security. Users can be assigned to multiple groups, facilitating organized and efficient permission management.

#### Secure Password Generation:

The script generates random passwords for each user, enhancing security. Passwords are securely stored in a file with restricted access, ensuring that only authorized personnel can view them.

#### Logging and Documentation:

Actions performed by the script are logged to a file, providing an audit trail for accountability and troubleshooting.

#### Usage:

1. Input File: The script takes an input file containing the list of users and the groups they are to be added to. It is formatted as `user;groups` (see the sample input at the end of this post).

#### Conclusion

Automating user and group management with a bash script is a very good way to streamline administrative tasks and ensure consistency across a system. In this module, we have demonstrated how to create a script that reads user and group information from a file, creates users and groups, and sets passwords while logging the entire process into a log file. This script can be modified and adapted to different environments and requirements, making it a versatile tool for system administrators.

Here's a link to my script: [here](https://github.com/The-Olatunji/User-management-Bash-Scripting.git)
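For reference, here is what the input file described under **Usage** might look like (a small illustrative sample; the usernames, groups, and script filename are made up):

```
light;sudo,dev,www-data
idimma;sudo
mayowa;dev,www-data
```

With that file saved as `users.txt`, the script would be run as:

```
sudo bash create_users.sh users.txt
```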
tophe
1,913,214
21345tyGood awsefg12
So this is how it go So this is how it will go So this is how it will go  ji WORK NOW I THINK and you...
0
2024-07-05T21:50:57
https://dev.to/ishaan_singhal_f3b6b687f3/21345tygood-awsefg12-3p3a
So this is how it go So this is how it will go So this is how it will go&nbsp;<div><br></div><div><br></div><img src="https://res.cloudinary.com/dlnuvrqki/image/upload/v1720216123/kwhyeoch6f9o9rwwb2ve.png" alt="Editor Media" class="mt-4 max-w-xs h-auto mx-auto" style="max-width: 30%;"><div>ji&nbsp;</div><div><br></div>WORK NOW I THINK&nbsp;<div><br></div><div>and you should have some space</div>
ishaan_singhal_f3b6b687f3
1,913,213
How to Connect MLX90614 Infrared to Raspberry Pi Pico
In this blog post, we’ll guide you through connecting the MLX90614 infrared temperature sensor to...
0
2024-07-05T21:50:53
https://dev.to/shilleh/how-to-connect-mlx90614-infrared-to-raspberry-pi-pico-44gc
micropython, raspberrypi, beginners, iot
{% embed https://www.youtube.com/watch?v=ckBF22AxZeg %}

In this blog post, we’ll guide you through connecting the MLX90614 infrared temperature sensor to a Raspberry Pi Pico W using MicroPython. The MLX90614 sensor allows for non-contact temperature measurements, making it ideal for various applications. We’ll provide a step-by-step tutorial, including wiring, code, and testing.

**Before we delve into the topic, we invite you to support our ongoing efforts and explore our various platforms dedicated to enhancing your IoT projects:**

- **Subscribe to our YouTube Channel:** Stay updated with our latest tutorials and project insights by subscribing to our channel at YouTube — Shilleh.
- **Support Us:** Your support is invaluable. Consider buying me a coffee at Buy Me A Coffee to help us continue creating quality content.
- Hire Expert IoT Services: For personalized assistance with your IoT projects, hire me on UpWork.
- **ShillehTek Website (Exclusive Discounts):** [https://shillehtek.com/collections/all](https://shillehtek.com/collections/all)

**ShillehTek Amazon Store for MLX90614 Pre-Soldered:**

[ShillehTek Amazon Store — US](https://www.amazon.com/stores/page/F0566360-4583-41FF-8528-6C4A15190CD6?channel=yt)

[ShillehTek Amazon Store — Canada](https://www.amazon.ca/stores/page/036180BA-2EA0-4A49-A174-31E697A671C2?channel=canada)

[ShillehTek Amazon Store — Japan](https://www.amazon.co.jp/stores/page/C388A744-C8DF-4693-B864-B216DEEEB9E3?channel=japan)

**Components Needed**

- Raspberry Pi Pico W
- MLX90614 Infrared Temperature Sensor
- Breadboard (optional)
- 4 Jumper wires

**Wiring Diagram**

Connect the MLX90614 to the Raspberry Pi Pico W as follows:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e0g5t2d98k48ycs8sacc.png)

**MLX90614 VIN to Pico W 3.3V
MLX90614 GND to Pico W GND
MLX90614 SCL to Pico W GP1 (SCL)
MLX90614 SDA to Pico W GP0 (SDA)**

**Library Code**

Save the following library code in mlx90614.py and upload it to your Raspberry Pi Pico W.

```
"""
MicroPython MLX90614 IR temperature sensor driver
https://github.com/mcauser/micropython-mlx90614

MIT License
Copyright (c) 2016 Mike Causer

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
"""import ustructclass SensorBase:def read16(self, register): data = self.i2c.readfrom_mem(self.address, register, 2) return ustruct.unpack('<H', data)[0]def read_temp(self, register): temp = self.read16(register); # apply measurement resolution (0.02 degrees per LSB) temp *= .02; # Kelvin to Celsius temp -= 273.15; return temp;def read_ambient_temp(self): return self.read_temp(self._REGISTER_TA)def read_object_temp(self): return self.read_temp(self._REGISTER_TOBJ1)def read_object2_temp(self): if self.dual_zone: return self.read_temp(self._REGISTER_TOBJ2) else: raise RuntimeError("Device only has one thermopile")@property def ambient_temp(self): return self.read_ambient_temp()@property def object_temp(self): return self.read_object_temp()@property def object2_temp(self): return self.read_object2_temp()class MLX90614(SensorBase):_REGISTER_TA = 0x06 _REGISTER_TOBJ1 = 0x07 _REGISTER_TOBJ2 = 0x08def __init__(self, i2c, address=0x5a): self.i2c = i2c self.address = address _config1 = i2c.readfrom_mem(address, 0x25, 2) _dz = ustruct.unpack('<H', _config1)[0] & (1<<6) self.dual_zone = True if _dz else Falseclass MLX90615(SensorBase):_REGISTER_TA = 0x26 _REGISTER_TOBJ1 = 0x27def __init__(self, i2c, address=0x5b): self.i2c = i2c self.address = address self.dual_zone = False ``` Save the following main code in a file named main.py and upload it to your Raspberry Pi Pico W. ``` import time import machine from mlx90614 import MLX90614# Initialize I2C bus i2c = machine.I2C(0, scl=machine.Pin(1), sda=machine.Pin(0), freq=100000)# Scan for I2C devices devices = i2c.scan()if devices: print("I2C devices found:", [hex(device) for device in devices]) else: print("No I2C devices found")# Initialize the MLX90614 sensor sensor = MLX90614(i2c)while True: ambient_temp = sensor.ambient_temp object_temp = sensor.object_tempprint(f"Ambient Temperature: {ambient_temp:.2f}°C") print(f"Object Temperature: {object_temp:.2f}°C") time.sleep(1) ``` **Running the Code** Upload both mlx90614.py and main.py to your Raspberry Pi Pico W. Run the main.py script using Thonny IDE or another suitable MicroPython environment. **Checking the Output** After running the script, you should see the ambient and object temperatures printed in the console every second. Ensure the temperatures are within realistic ranges to confirm the sensor is working correctly. You can point it at your skin closely to see if it changes the temperature reading. Goodluck on your project! ## Conclusion By following this guide, you can easily connect the MLX90614 infrared temperature sensor to your Raspberry Pi Pico W and read temperature data using MicroPython. This setup allows for non-contact temperature measurements, which can be useful in various projects. Do not forget to subscribe or book a consulting slot on buymeacoffee if you have any questions!
shilleh
1,913,212
21345tyGood awsefg1
So this is how it go So this is how it will go So this is how it will go  ji WORK NOW I THINK 
0
2024-07-05T21:50:23
https://dev.to/ishaan_singhal_f3b6b687f3/21345tygood-awsefg1-1f4b
So this is how it go So this is how it will go So this is how it will go&nbsp;<div><br></div><div><br></div><img src="https://res.cloudinary.com/dlnuvrqki/image/upload/v1720216123/kwhyeoch6f9o9rwwb2ve.png" alt="Editor Media" class="mt-4 max-w-xs h-auto mx-auto" style="max-width: 30%;"><div>ji&nbsp;</div><div><br></div>WORK NOW I THINK&nbsp;
ishaan_singhal_f3b6b687f3
1,913,211
21345tyGood awsefg
So this is how it will go So this is how it will go So this is how it will go So this is how it will...
0
2024-07-05T21:48:50
https://dev.to/ishaan_singhal_f3b6b687f3/21345tygood-awsefg-552i
So this is how it will go So this is how it will go So this is how it will go So this is how it will go So this is how it will go So this is how it will go So this is how it will go So this is how it will go So this is how it will go So this is how it will go So this is how it will go&nbsp;<div><br></div><div><br></div><div class="skeleton-image mt-4 w-48 h-32 bg-gray-700 animate-pulse mx-auto"></div>
ishaan_singhal_f3b6b687f3
1,913,210
Render Angular Components in Markdown
This example demonstrates rendering of markdown in angular and also how to render angular...
0
2024-07-05T21:46:54
https://dev.to/shhdharmen/render-angular-components-in-markdown-496l
angular, markdown, markedjs, highlightjs
---
title: Render Angular Components in Markdown
published: true
description:
tags: angular,markdown,markedjs,highlightjs
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rz5fcawiy5rrf8vyp5sb.png
---

> This example demonstrates rendering of markdown in angular and also how to render angular components in markdown.

First, we will set up a `<markdown-renderer>` component to render `.md` files. Then we will look at how to render angular components.

## Markdown Renderer

Install needed dependencies:

```bash
npm i highlight.js marked marked-highlight
```

### Step 1: Create `markdown-renderer/highlight-code-block.ts`

This function will be used to highlight code in our markdown file.

```ts
import highlightJs from 'highlight.js';

export function highlightCodeBlock(code: string, language: string | undefined) {
  if (language) {
    return highlightJs.highlight(code, {
      language,
    }).value;
  }
  return code;
}
```

### Step 2: Create `markdown-renderer/transform-markdown.ts`

This function will be used to convert markdown to html.

```ts
import { marked } from 'marked';
import { markedHighlight } from 'marked-highlight';
import { highlightCodeBlock } from './highlight-code-block';

marked.use(markedHighlight({ highlight: highlightCodeBlock }));

export const markdownToHtml = (content: string) => {
  return marked(content);
};
```

### Step 3: Create `markdown-renderer/markdown.service.ts`

This service will be used in the component to read a `.md` file from a local or external location and then convert it to html.

```ts
import { HttpClient } from '@angular/common/http';
import { Injectable, inject } from '@angular/core';
import { map } from 'rxjs';
import { markdownToHtml } from './transform-markdown';

@Injectable({
  providedIn: 'root',
})
export class MarkdownService {
  private httpClient = inject(HttpClient);

  htmlContent(src: string) {
    return this.httpClient.get(src, { responseType: 'text' }).pipe(
      map((markdownContent) => {
        return markdownToHtml(markdownContent);
      })
    );
  }
}
```

### Step 4: Create `markdown-renderer/markdown-renderer.ts`

Finally, this will be our component which we can use to render markdown files.
```ts
import { Component, ElementRef, effect, inject, input } from '@angular/core';
import { MarkdownService } from './markdown.service';
import { take } from 'rxjs';
import highlightJs from 'highlight.js';

@Component({
  selector: 'markdown-renderer',
  template: 'Loading document...',
  standalone: true,
})
export class MarkdownRendererComponent {
  src = input.required<string>();
  textContent = '';

  private _elementRef = inject<ElementRef>(ElementRef);
  private markdownService = inject(MarkdownService);

  constructor() {
    effect(() => {
      const src = this.src();
      this.setDataFromSrc(src);
    });
  }

  setDataFromSrc(src: string) {
    this.markdownService
      .htmlContent(src)
      .pipe(take(1))
      .subscribe((htmlContent) => {
        this.updateDocument(htmlContent as string);
      });
  }

  updateDocument(rawHTML: string) {
    this._elementRef.nativeElement.innerHTML = rawHTML;
    this.textContent = this._elementRef.nativeElement.textContent;
    highlightJs.highlightAll();
  }
}
```

### Step 5: Provide HTTP

```ts
bootstrapApplication(App, {
  providers: [
    provideHttpClient(withFetch())
  ],
});
```

### Step 6: Usage

Now, wherever we want to render markdown, we will simply use `<markdown-renderer>`:

```ts
import { Component } from '@angular/core';
import { MarkdownRendererComponent } from './markdown-renderer/markdown-renderer';

@Component({
  selector: 'article',
  standalone: true,
  template: `<markdown-renderer src="/assets/article.md"></markdown-renderer>`,
  imports: [MarkdownRendererComponent],
})
export class ArticleComponent {}
```

## Angular Components in Markdown

Install needed dependencies:

```bash
npm i @angular/elements
```

### Step 1: Create `custom-elements.service.ts`

This service will be used to convert angular components to [custom elements](https://developer.mozilla.org/docs/Web/Web_Components/Using_custom_elements), so that we can easily use angular components in `.md` files.

```ts
import { inject, Injectable, Injector } from '@angular/core';
import { createCustomElement } from '@angular/elements';
import { SubscribeComponent } from './components/subscribe';
import { CounterComponent } from './components/counter';

@Injectable({ providedIn: 'root' })
export class CustomElementsService {
  private _injector = inject(Injector);

  setupCustomElements() {
    const subscribeElement = createCustomElement(SubscribeComponent, {
      injector: this._injector,
    });
    customElements.define('subscribe-component', subscribeElement);

    const counterElement = createCustomElement(CounterComponent, {
      injector: this._injector,
    });
    customElements.define('counter-component', counterElement);
  }
}
```

### Step 2: Call `setupCustomElements` through `APP_INITIALIZER`

As we want custom elements present from the initialization, we will use `APP_INITIALIZER`.

```ts
// Factory that runs the service's setup method on app startup
export function initializeCustomElements(customElementsService: CustomElementsService) {
  return () => customElementsService.setupCustomElements();
}

bootstrapApplication(App, {
  providers: [
    provideHttpClient(withFetch()),
    {
      provide: APP_INITIALIZER,
      useFactory: initializeCustomElements,
      multi: true,
      deps: [CustomElementsService],
    },
  ],
});
```

### Step 3: Usage

Finally, you can simply use your custom elements in a `.md` file and they will render the angular components, like below:

```md
<subscribe-component></subscribe-component>

<counter-component></counter-component>
```

## Code

{% embed https://stackblitz.com/edit/stackblitz-starters-rgrpl6?file=src%2Fassets%2Farticle.md %}

## Support free content creation

Even though the courses and articles are available at no cost, your support in my endeavor to deliver top-notch educational content would be highly valued.
Your decision to contribute aids me in persistently improving the course, creating additional resources, and maintaining the accessibility of these materials for all. I'm grateful for your consideration to contribute and make a meaningful difference! [![🙏 Support free content creation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/85uoalkeh2bfmz3b7yz1.png)](https://donate.stripe.com/00g17jgeYb1d2SA7sw?utm_source=devto)
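As a final aside, the `SubscribeComponent` and `CounterComponent` registered in the custom-elements service aren't shown in this post. For illustration, a minimal `CounterComponent` could look like this (an assumed implementation, not the article's actual code):

```ts
import { Component, signal } from '@angular/core';

@Component({
  selector: 'app-counter',
  standalone: true,
  template: `
    <button (click)="count.set(count() - 1)">-</button>
    <span>{{ count() }}</span>
    <button (click)="count.set(count() + 1)">+</button>
  `,
})
export class CounterComponent {
  // Signal-based state keeps the custom element self-contained
  count = signal(0);
}
```

Once registered with `createCustomElement`, this renders anywhere `<counter-component>` appears in the markdown.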
shhdharmen
1,913,208
Sintayew4/keplr-chain-registry/draft/exciting-turing
A post by Sintayew Gashaw Ali
0
2024-07-05T21:33:32
https://dev.to/sintayew4/sintayew4keplr-chain-registrydraftexciting-turing-2fcm
codesandbox
{% codesandbox zpf3kn %}
sintayew4
1,913,207
🎨 Day 18: Mastering Layers in Figma 🎨
👋 Hey, Design Enthusiasts! I'm Prince Chouhan, an aspiring UI/UX designer, here to share insights...
0
2024-07-05T21:32:48
https://dev.to/prince_chouhan/day-18-mastering-layers-in-figma-1mob
ui, uidesign, uiweekly, ux
👋 Hey, Design Enthusiasts! I'm Prince Chouhan, an aspiring UI/UX designer, here to share insights on managing and editing layers in Figma. Let's dive in! 🚀 🌟 Learning Highlights: Understanding how each shape or asset imported into Figma creates a new layer. Importance of layer organization for design clarity and functionality. Exploring the concept of parenting in design and its impact on layout management. 🚀 Key Takeaways: Layers dictate the visibility and stacking order of elements in your design. Parent-child relationships influence how grouped elements behave within a frame. Techniques to reorder layers and their icons for different element types. 🔧 Detailed Process: We started by adding an iPhone 14 frame to our canvas, illustrating how new layers are created upon adding elements. We explored layer icons and their significance in distinguishing between various element types. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a9ti8p7je0evrt5zrmmc.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4q6l3s1agkzi6r0y5za5.png) 🧩 Challenges: Understanding layer hierarchy and effectively managing complex layouts can be daunting initially. 💡 Practical Application: Mastering layer management enhances design efficiency, allowing for seamless adjustments and revisions. 🔍 In-Depth Analysis: The session emphasized the significance of layer order in design composition and the role of grouping for cohesive element management. 🌐 Community Engagement: 👉 What challenges have you faced while managing layers in your design projects? Share your tips! 🌟 Quote of the Day: "Effective layer management is the cornerstone of streamlined design workflows." - Prince Chouhan #UIDesign #UXDesign #Figma #DesignTips #LearningJourney #GraphicDesign #DesignCommunity #FigmaTips #DesignTools #ImageEditing #CreativeDesign #UIUXDesign #DigitalArt #DesignLife #DesignerLife #UIUXInspiration #DesignProcess #UserExperience #InterfaceDesign #VisualDesign #TechDesign #PrinceChouhan
prince_chouhan
1,913,199
Building a CRUD API with FastAPI and MongoDB
Welcome, fellow developers! In this blog post, we'll dive into the exciting world of building RESTful...
0
2024-07-05T21:30:11
https://dev.to/aquibpy/building-a-crud-api-with-fastapi-and-mongodb-32n
python, mongodb, fastapi, learning
Welcome, fellow developers! In this blog post, we'll dive into the exciting world of building RESTful APIs with FastAPI, a modern, fast, and easy-to-use framework, and MongoDB, a powerful NoSQL database. We'll create a simple CRUD (Create, Read, Update, Delete) API for managing a collection of books. Let's get started!

**1. Setting the Stage:**

- **Install Dependencies:** Begin by installing the required libraries:

```bash
pip install fastapi uvicorn "pymongo[srv]"
```

- **Create Project Structure:** Organize your project with a clear structure:

```
my-book-api/
├── main.py
├── models.py
└── schemas.py
```

**2. Defining Our Book Model:**

We'll start by defining our book model in `models.py`. This model represents the structure of our book data:

```python
from pydantic import BaseModel

class Book(BaseModel):
    title: str
    author: str
    description: str
    published_year: int

    class Config:
        schema_extra = {
            "example": {
                "title": "The Hitchhiker's Guide to the Galaxy",
                "author": "Douglas Adams",
                "description": "A humorous science fiction novel.",
                "published_year": 1979
            }
        }
```

**3. Database Connection:**

In `main.py`, we'll establish a connection to our MongoDB database:

```python
from fastapi import FastAPI, HTTPException
from bson import ObjectId
from pymongo import MongoClient

app = FastAPI()

client = MongoClient("mongodb+srv://<your_username>:<your_password>@<your_cluster>.mongodb.net/<your_database>?retryWrites=true&w=majority")
db = client["my_book_database"]
collection = db["books"]
```

**4. Creating the CRUD Operations:**

Now, let's define our CRUD endpoints within `main.py`:

**4.1. Creating a Book (POST):**

```python
@app.post("/books", response_model=Book)
async def create_book(book: Book):
    book_dict = book.dict()
    collection.insert_one(book_dict)
    return book
```

**4.2. Reading Books (GET):**

```python
@app.get("/books")
async def get_books():
    books = []
    for book in collection.find():
        books.append(Book(**book))
    return books
```

**4.3. Getting a Specific Book (GET):**

```python
@app.get("/books/{book_id}", response_model=Book)
async def get_book(book_id: str):
    book = collection.find_one({"_id": ObjectId(book_id)})
    if book:
        return Book(**book)
    else:
        raise HTTPException(status_code=404, detail="Book not found")
```

**4.4. Updating a Book (PUT):**

```python
@app.put("/books/{book_id}", response_model=Book)
async def update_book(book_id: str, book: Book):
    book_dict = book.dict()
    collection.update_one({"_id": ObjectId(book_id)}, {"$set": book_dict})
    return book
```

**4.5. Deleting a Book (DELETE):**

```python
@app.delete("/books/{book_id}")
async def delete_book(book_id: str):
    collection.delete_one({"_id": ObjectId(book_id)})
    return {"message": "Book deleted successfully"}
```

**5. Running the API:**

Finally, run the API using `uvicorn`:

```bash
uvicorn main:app --reload
```

**6. Testing the API:**

You can test your API using tools like Postman or curl. For example, to create a new book:

```bash
curl -X POST -H "Content-Type: application/json" -d '{"title": "The Lord of the Rings", "author": "J.R.R. Tolkien", "description": "A classic fantasy novel.", "published_year": 1954}' http://localhost:8000/books
```

**Tips and Tricks:**

- **Use a .env file:** Store sensitive information like database credentials in a `.env` file.
- **Implement error handling:** Handle potential errors with appropriate error messages and status codes.
- **Document your API:** Use tools like Swagger or OpenAPI to create documentation for your API.
- **Consider using an asynchronous driver such as Motor for non-blocking database connections and operations.**

**Conclusion:**

Creating CRUD operations with FastAPI and MongoDB is a simple yet powerful way to build RESTful APIs. By following these steps, you can quickly get your API up and running. Remember to explore additional features like pagination, filtering, and authentication for a more robust and user-friendly API.

Happy coding!
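Following up on the `.env` tip above, here's a minimal sketch of loading the connection string from the environment (assuming the `python-dotenv` package and an illustrative `MONGODB_URI` variable name):

```python
# main.py (excerpt): keep credentials out of source code
import os

from dotenv import load_dotenv
from pymongo import MongoClient

load_dotenv()  # reads key=value pairs from a local .env file into os.environ
client = MongoClient(os.environ["MONGODB_URI"])
```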
aquibpy
1,913,206
Vite + Github Actions + One Build many deploy
Hi - Am working on a vite project where I need to deploy the static build to multiple environments....
0
2024-07-05T21:27:00
https://dev.to/nv_conqueror/vite-github-actions-one-build-many-deploy-1m2h
vite
Hi - I'm working on a Vite project where I need to deploy the static build to multiple environments. I'm using GitHub Actions. Looking at the Vite build, all env variables are injected at build time, so I had to build a package for each environment and then deploy, which doesn't make sense. Does anyone have any inputs on how to create one build and leverage it for multiple environments? Deploying to Cloud Foundry, FYI. I did try unpacking, rebuilding, and redeploying to CF, but the configuration picks the file from Artifactory, ignoring the custom file. Any pointers are appreciated.
nv_conqueror
1,913,203
Resolve conflicts during Git merge and rebase
Say, you have main &amp; feature as 2 branches. Merge conflicts If you want to merge...
0
2024-07-05T21:25:18
https://dev.to/vishal_bhavsar/resolve-conflicts-during-git-merge-and-rebase-3fij
Say, you have `main` & `feature` as 2 branches.

## Merge conflicts

If you want to merge the `feature` branch into `main`, run the following commands.

```
git checkout main
git merge feature
```

This can result in merge conflicts. To resolve all conflicting hunks in favor of `main` (while still taking `feature`'s non-conflicting changes), run the following command.

```
git merge -Xours feature
```

To resolve conflicting hunks in favor of the `feature` branch, run the command below.

```
git merge -Xtheirs feature
```

Note that `-Xours`/`-Xtheirs` only decide how *conflicting* hunks are resolved; everything that merges cleanly is merged as usual.

## Rebase conflicts

If you want to rebase the `feature` branch onto `main`, run the commands below.

```
git checkout feature
git rebase main
```

Again, this can result in merge conflicts. During a rebase the sides are swapped: "ours" is the branch you are rebasing onto (`main`), and "theirs" is the commit being replayed (from `feature`). So, to accept the changes in the `feature` branch, run the command below.

```
git rebase main -Xtheirs
```

To accept the changes in the `main` branch, run the command below.

```
git rebase main -Xours
```
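If a branch-wide strategy is too blunt, conflicts can also be resolved file by file (shown here during a merge; as noted above, the meanings of ours/theirs swap during a rebase):

```
# Keep main's version of a single conflicted file
git checkout --ours path/to/file

# ...or keep feature's version of it
git checkout --theirs path/to/file

# Mark it resolved and finish the merge
git add path/to/file
git merge --continue
```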
vishal_bhavsar
1,911,197
How to Create a Virtual Machine Scale Set in Azure
Creating a Virtual Machine Scale Set (VMSS) in Azure allows you to manage and automatically scale a...
0
2024-07-05T21:21:43
https://dev.to/florence_8042063da11e29d1/how-to-create-a-virtual-machine-scale-set-in-azure-334m
virtualmachinescaleset, vmss, azure, virtualmachine
Creating a Virtual Machine Scale Set (VMSS) in Azure allows you to manage and automatically scale a group of [virtual machines](https://dev.to/florence_8042063da11e29d1/step-by-step-guide-to-create-deploy-and-connect-to-a-virtual-machine-on-azure-1b00). Here's a step-by-step guide to create a VMSS using the Azure portal:

### Step 1: Sign in to Azure Portal

Open your web browser and go to the Azure portal. Sign in with your [Azure](https://dev.to/florence_8042063da11e29d1/core-architectural-components-of-azure-all-you-need-to-know-2n5k) account credentials.

### Step 2: Navigate to Virtual Machine Scale Sets

In the Azure portal, click the **"Create a resource"** button (+) in the left-hand menu. In the **"Search the Marketplace"** box, type **"Virtual Machine Scale Sets"** and select it from the list.

![Type and select Virtual Machine Scale Set in the search box on Azure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r1y9xg3avcnr2usvefiu.png)

Click "Create" to start the creation process.

![Click on create](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/93862rh32jkpa2eksx1k.png)

### Step 3: Configure Basic Settings

**Subscription**: Select your Azure subscription.
**Resource Group**: Select an existing resource group or create a new one.
**Name**: Enter a name for your scale set.
**Region**: Choose the region where you want to deploy the VMSS.
**Availability Zone**: (Optional) Select an availability zone if required.

![select or create resource group, name, region, and availability zone](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fu2o5mevjfkolohut8ql.png)

**Orchestration mode**: Choose Uniform (recommended for most scenarios) or Flexible.
**Scaling mode**: Choose **Autoscaling** (there are other options).
For **Scaling configuration**, click on **Configure**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b1t5ldclwbjkf26o5nrx.png)

### Step 4: Scaling Configuration

**Scaling conditions**: Click on **Add a Scaling Condition**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/td0wf5razdg758hzx31r.png)

**Configure Scaling Settings**

**Instance Count**: Set the initial number of instances (e.g., 2).
**Scaling Policy**: Configure the scaling policy to automatically increase or decrease the number of instances based on CPU usage, memory, or custom metrics.

Select the conditions, then click on **Save**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6opg9qo24qkzdfx3pavv.png)

This takes us back to **Scaling Configuration**, where the added scale condition appears.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p71bhr7u44izlcsw2uc7.png)

Click on **Save**, which takes us back to where we started in **Create a virtual machine scale set**. The next step is to configure instance details.

### Step 5: Configure Instance Details

**Image**: Select an operating system image for your VMs (e.g., Ubuntu Server 20.04 LTS).
**Size**: Choose a VM size (e.g., Standard DS1 v2).
For **Authentication type**, select **SSH public key**.

### Step 6: Configure Networking

To **edit the network interface**, click on the icon.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rs2czy5ta6f23tczaz1t.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hd0l0sp7lixg28xia3m0.png)

**Select inbound ports**: Select **HTTP (80)** and **SSH (22)**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/odkklc3i7pg6iu2po8af.png)

Also enable **Public IP address** and **Accelerated networking**, then click on **Ok**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v8veu9n5zu3sssmwqgi0.png)

Select the network interface, and under **Load balancing options** select **Azure load balancer**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r8y6pm53xkamnljx6rhx.png)

On **Select a load balancer**, if there is no existing load balancer, click on **Create a load balancer**. A load balancer helps to distribute traffic across the VMs. After creating a load balancer, click on **Review and create**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fcei2mvxh8zqfon0yzvb.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hsrb68tfne3ij6yye430.png)

### Step 7: Download Private Key

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/06o5dp9p0wa41jty9tth.png)

When deployment is complete, click on **Go to resource**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5lofi38evlvbcn9jwvpq.png)

Click on **Networking**, then **Load balancing**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3tziyol5dh2rltlnl3b2.png)

**Copy IP address**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ciwymrbzhctt8fi63i89.png)

Go to **Command prompt** and **Run as administrator**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hdmlnvziw06jndil1g09.png)

In the **Command prompt**, type `ssh -i <path-to-downloaded-private-key> <username>@<ip-address>`, using the file path of the **downloaded private key** from your file manager, your username, and the IP address you copied.

### Step 8: Install Web Server

To install the web server, type in **sudo apt-get update**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v94nlvsbn65u1ycnyq11.png)

Type **sudo apt-get install nginx -y** in the next prompt.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4bklc0a9oib4ht4ct1ew.png)

View of the **Web Server**:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1ac2avm9qk6ize82r87a.png)

### Step 9: Monitor

Use the "Instances" tab to monitor the status of individual VM instances. Click on **Monitoring**, and then **Metrics** to monitor the workload.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1rrloblrppgljmgcl6n6.png)

Use the "Scaling" tab to adjust scaling policies and settings.

### Conclusion

By following these steps, you can create a Virtual Machine Scale Set in Azure, which allows you to automatically scale your application based on demand, ensuring high availability and performance.
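If you prefer automating this instead of clicking through the portal, the Azure CLI can create a comparable scale set in one command (a sketch; resource names are illustrative and the flags assume a reasonably recent CLI version):

```bash
az vmss create \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --image Ubuntu2204 \
  --vm-sku Standard_DS1_v2 \
  --instance-count 2 \
  --admin-username azureuser \
  --generate-ssh-keys
```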
florence_8042063da11e29d1
1,913,201
Adding Mermaid diagrams to Astro MDX
Adding Mermaid diagrams to Astro pages has a surprising number of challenges. Most of the solutions...
0
2024-07-05T21:17:34
https://xkonti.tech/blog/astro-mermaid-mdx/
astro, webdev, markdown, mermaid
Adding [Mermaid diagrams](https://mermaid.js.org/) to Astro pages has a surprising number of challenges. Most of the solutions out there rely on using headless browsers to render the diagrams. This approach has a few drawbacks:

- You need to install a headless browser on your machine to be able to build your site
- It might prevent your site from building on CI/CD (like Cloudflare Pages)
- It might slow down your site's build time significantly

The reason for the popularity of this approach is that Mermaid.js relies on browser APIs to lay the diagrams out. Astro doesn't have access to these APIs, so it can't render the diagrams directly.

Fortunately, there's another option: rendering the diagrams on the client side. Definitely not ideal, as suddenly our pages won't be fully pre-rendered, but if only some of your pages have diagrams, it's still a viable option. Especially as it doesn't require any additional setup.

## High level overview

The idea is to use the [official mermaid package](https://www.npmjs.com/package/mermaid) directly and let it render the diagrams on the client side. In my case I want to have a diagram as a separate component to add some additional functionality, like the ability to display the diagram's source code. This decision has one side effect: **The component won't work in pure Markdown files. It will only work in MDX files.** To make it work in pure Markdown files, one would need to create a rehype/remark plugin, but I didn't feel like it was worth the effort - I use MDX files for everything as it provides more functionality.

## Building the component

First we need to install the mermaid package:

```bash
# npm
npm install mermaid

# pnpm
pnpm add mermaid
```

Now let's create the component. It will be an Astro component as we don't need any additional framework functionality for this. Let's call it `Mermaid.astro` - I placed it in the `src/components/markdown` folder:

```html
---
export interface Props {
  title?: string;
}

const { title = "" } = Astro.props;
---

<script>
  import mermaid from "mermaid";
</script>

<figure>
  <figcaption>{title}</figcaption>
  <pre class="mermaid not-prose">
    <slot />
  </pre>
</figure>
```

Nothing special here:

1. We make the component accept a `title` prop so that we can display a nice title - relying on mermaid's built-in titles isn't optimal as the title will show up in various sizes depending on the diagram's size.
2. We add a script that will import the mermaid package on the client side. It's worth noting that Astro will include that script only once on the page no matter how many times we use the component. Simply importing `mermaid` will register a `DOMContentLoaded` event listener for the mermaid renderer.
3. The mermaid renderer looks through the entire page for `<pre>` elements with the `mermaid` class. Including it here will ensure that the diagram code will be processed by mermaid. In my case I also need to add the `not-prose` class to remove some conflicts with my markdown styling.
4. The `<slot />` element will be replaced with the mermaid code wrapped by this component.

Now let's try to use it in an MDX file:

```md
---
title: Testing mermaid in Astro
---

import Mermaid from "@components/markdown/Mermaid.astro";

<Mermaid title="Does it work?">
flowchart LR
    Start --> Stop
</Mermaid>
```

And the result is:

![Screenshot showing mermaid complaining about a syntax error](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6lv7cmzkrv4ff3s7f912.png)

This is where inspecting the page source comes in handy.
This way we can see what Astro rendered before mermaid tried to process it:

```html
<figure>
  <figcaption>Does it work?</figcaption>
  <pre class="mermaid not-prose">
    <p>flowchart LR Start —&gt; Stop</p>
  </pre>
</figure>
```

There are several issues here:

- Our code is wrapped in `<p>` tag, confusing the hell out of mermaid
- The double dash `--` has been replaced with an em dash `—` which is not what mermaid expects
- The `>` character has been replaced with `&gt;` which messes things up even more

What could have caused this? Markdown. When the MDX page is rendered, all that is not explicitly an MDX element, is processed by markdown. This includes everything wrapped in the `<Mermaid>` component. Markdown saw some text - it marked it as a paragraph, escaped the scary characters (`>`), and then _prettified_ it by consolidating the dashes.

## Solving the issue

There are several ways to solve this issue:

- Pass the code as a string to the component - deal with manually adding `\n` to simulate new lines as HTML doesn't support multiline arguments.
- Load the diagrams as separate files using the `import` statement - don't have everything in one place.
- Go the crazy route and pass a code block to the component 🤪

Of course I went for the last one. It might sound like a great idea, but depending on the way your setup renders the code blocks, it might be a bit of a pain to deal with. Let's try it:

```md
---
title: Testing mermaid in Astro
---

import Mermaid from "@components/markdown/Mermaid.astro";

<Mermaid title="Does it work?">
`` `mermaid
flowchart LR
    Start --> Stop
`` `
</Mermaid>
```

> ⚠️ Please remove the extra space from triple backticks - dev.to has really bad code block support that doesn't allow nesting. Consider reading this article on [my website](https://xkonti.tech/blog/astro-mermaid-mdx).

[My blog](https://xkonti.tech/blog/astro-mermaid-mdx) uses [Expressive Code](https://expressive-code.com/) to render the code blocks, and therefore the page's source code will look like this:

```html
<figure>
  <figcaption>Does it work?</figcaption>
  <pre class="mermaid not-prose">
    <div class="expressive-code">
      <figure class="frame">
        <figcaption class="header"></figcaption>
        <pre data-language="mermaid">
          <code>
            <div class="ec-line">
              <div class="code">
                <span style="--0:#B392F0;--1:#24292E">flowchart LR</span>
              </div>
            </div>
            <div class="ec-line">
              <div class="code">
                <span class="indent">
                  <span style="--0:#B392F0;--1:#24292E"> </span>
                </span>
                <span style="--0:#B392F0;--1:#24292E">Start --&gt; Stop</span>
              </div>
            </div>
          </code>
        </pre>
        <div class="copy">
          <button
            title="Copy to clipboard"
            data-copied="Copied!"
            data-code="flowchart LR Start --> Stop"
          >
            <div></div>
          </button>
        </div>
      </figure>
    </div>
  </pre>
</figure>
```

Wow. This added a bit more markup to the page... but what's that? A `copy` button? How does that work? Take a look at its markup:

```html
<button
  title="Copy to clipboard"
  data-copied="Copied!"
  data-code="flowchart LR Start --> Stop"
>
  <div></div>
</button>
```

That's the whole source code of our diagram in a pleasant HTML attribute string. It's easy to extract it and give it to mermaid on the client side. Let's modify our `Mermaid.astro` component to do exactly that!

> **No copy button?**
> If you're not using Expressive Code and your code blocks don't have the handy `copy` button, I included an alternative code snippet at the end of the article.

## Preparing the component

First, let's rework the component's HTML markup.
We'll wrap it in a `figure` element and place the code block inside a `details` element. This way we can hide the code block by default and show it only when the user clicks on the `Source` button.

```html
...
<figure class="expandable-diagram">
  <figcaption>{title}</figcaption>

  <div class="diagram-content">Loading diagram...</div>

  <details>
    <summary>Source</summary>
    <slot />
  </details>
</figure>
```

1. The whole component is wrapped in a `figure` element with a `expandable-diagram` class. This way we can easily find all instances of the component using CSS selectors.
2. The `div.diagram-content` element is where the diagram will be rendered.
3. The `Source` button needs to be clicked by the user to reveal the code block.
4. The `slot` element will be replaced with the code block rendered by Expressive Code.

## Extracting the source code

Now let's rewrite our script to extract the code from the `copy` button and place it in the `.diagram-content` element:

```html
...
<script>
  import mermaid from "mermaid";

  // Postpone mermaid initialization
  mermaid.initialize({ startOnLoad: false });

  function extractMermaidCode() {
    // Find all mermaid components
    const mermaidElements = document.querySelectorAll("figure.expandable-diagram");

    mermaidElements.forEach((element) => {
      // Find the `copy` button for each component
      const copyButton = element.querySelector(".copy button");

      // Extract the code from the `data-code` attribute
      let code = copyButton.dataset.code;

      // Replace the U+007f character with `\n` to simulate new lines
      code = code.replace(/\u007F/g, "\n");

      // Construct the `pre` element for the diagram code
      const preElement = document.createElement("pre");
      preElement.className = "mermaid not-prose";
      preElement.innerHTML = code;

      // Find the diagram content container and override its content
      const diagramContainer = element.querySelector(".diagram-content");
      diagramContainer.replaceChild(preElement, diagramContainer.firstChild);
    });
  }

  // Wait for the DOM to be fully loaded
  document.addEventListener("DOMContentLoaded", async () => {
    extractMermaidCode();
    mermaid.initialize({ startOnLoad: true });
  });
</script>
...
```

A lot is happening here, so let's break it down:

1. To prevent mermaid from processing the diagrams instantly, we need to postpone its initialization.
2. We define the `extractMermaidCode` function to keep things somewhat organized.
3. The script will be executed only once per page, so we need to find all instances of our `Mermaid` component. This way we can process them all at once.
4. Once we're in our component, we can easily find the `copy` button as there's only one.
5. Extracting the code is relatively simple.
6. Of course there's one more catch. The `copy` button contains a `data-code` attribute with the newlines replaced with the `U+007F` character. We need to replace them with `\n` for mermaid to understand it.
7. Now that we have the code, we can create a new `pre` element with the `mermaid` class. This is what the mermaid library will look for to render the diagram from.
8. We can replace the existing diagram content (`Loading diagram...`) with the newly created `pre` element.
9. We register our own `DOMContentLoaded` event listener that will allow us to run code only once the page is fully loaded.
10. Finally, we call our `extractMermaidCode` function to prep the HTML for mermaid and render the diagrams.

Phew! That was a lot of code, but it's not the worst.
Let's save it and refresh the page:

![Screenshot showing the diagram displaying properly](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v0037mocngkrmgxl3hbi.png)

That's much better! The only thing left is to modify it a bit to make it look better. This is after a light dressing up with Tailwind to fit this blog's theme:

![Screenshot of the final version of the component](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a03fcbbgn5y9hl6sgtpz.png)

## In case you're not using Expressive Code

If you're not using Expressive Code and your code blocks don't have the `copy` button, there's always a different way. I know it sounds crazy, but you could try to go over all the spans rendered by the code block and collect the characters from there. After some fiddling with ChatGPT, here's an example of this approach in action that worked well for me:

```html
...
<script>
  import mermaid from "mermaid";

  mermaid.initialize({ startOnLoad: false });

  function extractAndCleanMermaidCode() {
    const mermaidElements = document.querySelectorAll("figure.expandable-diagram");

    mermaidElements.forEach((element) => {
      // Find the code element within the complex structure
      const codeElement = element.querySelector(
        'pre[data-language="mermaid"] code'
      );
      if (!codeElement) return;

      // Extract the text content from each line div
      const codeLines = codeElement.querySelectorAll(".ec-line .code");
      let cleanedCode = Array.from(codeLines)
        .map((line) => line.textContent.trim())
        .join("\n");

      // Remove any leading/trailing whitespace
      cleanedCode = cleanedCode.trim();

      // Create a new pre element with just the cleaned code
      const newPreElement = document.createElement("pre");
      newPreElement.className = "mermaid not-prose";
      newPreElement.textContent = cleanedCode;

      // Find the diagram content container
      const diagramContentContainer = element.querySelector(".diagram-content");

      // Replace existing diagram content child with the new pre element
      diagramContentContainer.replaceChild(newPreElement, diagramContentContainer.firstChild);
    });
  }

  // Wait for the DOM to be fully loaded
  document.addEventListener("DOMContentLoaded", async () => {
    extractAndCleanMermaidCode();
    mermaid.initialize({ startOnLoad: true });
  });
</script>
...
```

I hope this will help you out in marrying Astro with Mermaid.js.
xkonti
1,913,200
Unlocking the Power of Retrieval-Augmented Generation (RAG) as Learning Tools
We’ve all seen how ChatGPT brought a paradigm shift in the age of AI in a way we have never seen...
0
2024-07-05T21:03:51
https://dev.to/aisquare/unlocking-the-power-of-retrieval-augmented-generation-rag-as-learning-tools-3pn8
rag, ai, machinelearning, machinelearningtools
We’ve all seen how ChatGPT brought a paradigm shift in the age of AI in a way we have never seen before. Once the concept of LLMs was successfully implemented on such a large, commercial scale, it was only a given that the technology advancements in this sector would exponentially rise. One such sought-after technology is RAG.

[RAG, or Retrieval Augmented Generation](https://aws.amazon.com/what-is/retrieval-augmented-generation/), has emerged as a fascinating concept in the natural language processing (NLP) landscape that combines the strength of retrieval-based systems (a user query that yields documents as an output, or information obtained from external sources related to the query) and generative language models. It is an extension of generative models, providing answers that are not only contextually accurate but also information-rich (by outsourcing retrieval to external sources).

## UNDERSTANDING RAG

The main working of a [RAG](https://stackoverflow.blog/2023/10/18/retrieval-augmented-generation-keeping-llms-relevant-and-current/) system can be described in two simple steps:

1. **Retrieving the information:** Imagine you went to the Library of Alexandria and wanted to find materials on a niche topic. The librarian is your retriever who scours the external databases, pulling in relevant books, context, snippets or documents.
2. **Generating text from this information:** Now a generator simply takes all this information to give out relevant, contextual and coherent responses.

## LLMs VS RAG

LLMs are pre-trained AI models that can only provide, or rather generate, answers based on the existing database that they are ‘trained’ on. The quality of the answers they provide is then typically dependent on the quality/accuracy of the data provided to them. RAGs are not so different from LLMs, in that they utilize them too, but only after gathering relevant material from external data sources and feeding this new data to an LLM like GPT to generate a response.

## BENEFITS OF RAG

Responses generated from large language models (LLMs) can pose several challenges, as they are often outdated and limited to a specific information base. This is because, as mentioned above, the answers provided are wholly dependent on the dataset they are trained on. RAG reduces these shortcomings by having the LLM use retrieved content from pertinent content sources (open or closed) to produce a response. The retrieval-plus-generation process makes RAG systems shine in terms of accuracy, keeping us on the right track by reducing the risk of incorrect/misleading content. These systems are particularly helpful in taming the ‘hallucinations’ that LLMs sometimes suffer from (providing plausible but fictional information).

## USER CENTRIC LEARNING — ENABLED WITH RAG

User centricity refers to tailoring the product/material to each user, prioritizing the user, as everyone might have different needs and preferences. In the context of a learning environment, every user has a unique learning curve, requiring varying amounts of time to go through the same material. RAG comes in handy here with its ability to provide tailored responses, be it in terms of gathering relevant materials or answering queries with exact information and not off-topic content (context-aware generation). Because users are able to trace the origin of information to the many resources it has been gathered from, there is a certain amount of transparency and trust between the user and the AI model. We will explore more on this topic in the next heading.
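Concretely, the retrieve-then-generate flow described above can be sketched in a few lines of Python (illustrative only; `vector_index.search` and `llm.generate` stand in for whatever retriever and model you use):

```python
def answer(query, vector_index, llm, k=3):
    # 1. Retrieval: find the k most relevant documents for the query
    documents = vector_index.search(query, top_k=k)
    context = "\n\n".join(doc.text for doc in documents)

    # 2. Generation: have the LLM answer grounded in the retrieved context
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return llm.generate(prompt)
```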
## RAG SYSTEMS AS A LEARNING TOOL

[AutoRAG](https://marker-inc-korea.github.io/AutoRAG/tutorial.html), being the new hot topic of the past few weeks, has seen implementation in a lot of ways. One of them, of course, is as a learning tool.

1. **Customized Learning:** The thing about content is that in huge amounts, it becomes difficult to find a place to start. RAG systems simplify this by adapting the educational content to individual learner needs, providing personalized feedback and resources. This makes the learning model learner-centric rather than the learner having to adapt to the system. It reduces the amount of effort needed to start learning something, which is the biggest pillar in learning.
2. **Ease of Access:** Access to educational content anytime, anywhere, facilitating continuous learning. RAG systems get data from diverse sources, offering exhaustive coverage of topics. You’re no longer limited by your researching/googling abilities; RAG brings it to you.
3. **Improved Knowledge Retention:** The content provided is engaging and interactive. RAG systems promote active learning and critical thinking. They give a focus to key concepts, reinforcing learning and improving retention.

If we delve deeper into its use case in education, a few points that stand out are tutorial and homework assistance, curriculum development, and the biggest, language learning. To give a very simple example of language learning:

**Scenario:** Inquiry about a certain topic/phrase

1. **User query:** “I want to know the meaning of Die Daumen drücken in German”
2. **Retrieval:** RAG retrieves relevant information from sources like textbooks, novels, language forums
3. **Generation:** Now, using this content, it forms a response — “The meaning of Die Daumen drücken is ‘pressing the thumbs’. It is an expression used to wish luck, translating to the English phrase ‘crossing fingers’, depicted by placing a finger across the one next to it.”
4. **User Benefit:** Gains an understanding of the idiomatic expressions, along with culturally relevant equivalents.

## FUTURE DIRECTIONS

Autonomous RAG systems are an emerging and developing technology that can transform the education sector. Ongoing research will bring about technological advancements here as well. In terms of education, we can look forward to and explore its integration with other educational technologies, such as adaptive learning platforms and virtual classrooms. RAG systems can also support lifelong learning by providing resources and support for people of all ages and skills. By combining the strengths of retrieval-based and generation-based models, they offer personalized and engaging learning experiences.

## INTEGRATION IN AISQUARE

[AISquare](https://aisquare.com/) is an innovative platform designed to gamify the learning process for developers. Leveraging an advanced AI system, AISquare generates and provides access to millions, potentially billions, of questions across multiple domains. By incorporating elements of competition and skill recognition, AISquare not only makes learning engaging but also helps developers demonstrate their expertise in a measurable way. The platform is backed by the Dynamic Coalition on Gaming for Purpose ([DC-G4P](https://intgovforum.org/en/content/dynamic-coalition-on-gaming-for-purpose-dc-g4p)), affiliated with the UN’s Internet Governance Forum, which actively works on gamifying learning and exploring the potential uses of gaming across various sectors.
Together, AISquare and DC-G4P are dedicated to creating games with a purpose, driving continuous growth and development in the tech industry. RAG comes with the ability to reshape and streamline the question retrieval and caching process. It helps tailor the learning process and make it more user-focused by fetching questions based on the player’s rating and the difficulty level of the game. Engagement with the learner is enhanced not only by RAG’s ability to retrieve relevant questions, but also by the gamified experience the platform offers. This improves retention, reduces the time taken to go through the material, and makes the learning process fun.

You can reach us at [LinkedIn](https://www.linkedin.com/groups/14431174/), [X](https://x.com/AISquareAI), [Instagram](https://www.instagram.com/aisquarecommunity/), [Discord](https://discord.com/invite/8tJ3aCDYur).

Author: Aadya Gupta
aisquare
1,913,194
🚀How to Use MediatR Pipeline Behavior in .NET 8
In modern software development, ensuring that our applications are maintainable, testable, and...
0
2024-07-05T20:49:55
https://dev.to/mahendraputra21/how-to-use-mediatr-pipeline-behavior-in-net-8-3f90
dotnet, csharp, learning, tutorial
In modern software development, ensuring that our applications are maintainable, testable, and scalable is crucial. One tool that has gained popularity for achieving these goals in .NET applications is MediatR. MediatR facilitates the implementation of the mediator pattern, which helps reduce direct dependencies between components. One of its powerful features is Pipeline Behavior, which allows you to add pre- and post-processing steps to your request handling. In this blog post, we'll explore what MediatR Pipeline Behavior is, why it's beneficial, how to implement it, and its pros and cons, and conclude with some final thoughts.

---

## What is MediatR Pipeline Behavior?

MediatR Pipeline Behavior is a feature that allows you to inject additional logic around the handling of requests. Think of it as an assembly line in a factory where each stage can add to or modify the product being built. In the context of MediatR, this means you can add cross-cutting concerns like logging, validation, or performance monitoring before and after your request handler processes a request.

---

## Why We Need This

In any robust application, there are several concerns that need to be addressed globally, such as logging, validation, exception handling, and more. Without a centralized way to handle these, you'd have to scatter this logic across various parts of your codebase, leading to duplication and maintenance challenges. MediatR Pipeline Behavior centralizes these concerns, ensuring they are handled consistently and making your code cleaner and easier to maintain.

---

## How to Implement MediatR Pipeline Behavior in .NET 8

Implementing MediatR Pipeline Behavior in .NET 8 is straightforward. Here’s a step-by-step guide to get you started:

🧑‍💻**Install MediatR:**
First, install the necessary NuGet package via the Package Manager Console or the .NET CLI. Note that since MediatR 12 (the current line in the .NET 8 era), the dependency injection registration extensions ship inside the main `MediatR` package; the separate `MediatR.Extensions.Microsoft.DependencyInjection` package is only needed on older (pre-12) versions.

```sh
dotnet add package MediatR
```

🧑‍💻**Create a Pipeline Behavior:**
Create a class that implements `IPipelineBehavior<TRequest, TResponse>`. This interface has a single method, `Handle`, which you can use to define your pre- and post-processing logic.

```csharp
using MediatR;
using System;
using System.Threading;
using System.Threading.Tasks;

public class LoggingBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : notnull // matches the constraint on IPipelineBehavior in MediatR 12+
{
    public async Task<TResponse> Handle(TRequest request, RequestHandlerDelegate<TResponse> next, CancellationToken cancellationToken)
    {
        // Pre-processing logic
        Console.WriteLine($"Handling {typeof(TRequest).Name}");

        var response = await next();

        // Post-processing logic
        Console.WriteLine($"Handled {typeof(TResponse).Name}");

        return response;
    }
}
```

🧑‍💻**Register the Pipeline Behavior:**
In your `Program.cs` (or `Startup.cs` on older templates), register MediatR and your pipeline behavior with the DI container. The lambda-based `AddMediatR` overload shown here is the MediatR 12+ API; on pre-12 versions you would call `services.AddMediatR(typeof(Program))` instead.

```csharp
using MediatR;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddMediatR(cfg => cfg.RegisterServicesFromAssembly(typeof(Program).Assembly));
services.AddTransient(typeof(IPipelineBehavior<,>), typeof(LoggingBehavior<,>));
```

🧑‍💻**Create and Handle a Request:**
Define a request and its handler to see the pipeline behavior in action.
```csharp using MediatR; using System.Threading; using System.Threading.Tasks; public class MyRequest : IRequest<MyResponse> { public string Message { get; set; } } public class MyResponse { public string ResponseMessage { get; set; } } public class MyRequestHandler : IRequestHandler<MyRequest, MyResponse> { public Task<MyResponse> Handle(MyRequest request, CancellationToken cancellationToken) { return Task.FromResult(new MyResponse { ResponseMessage = $"Hello, {request.Message}" }); } } ``` ## Pros and Cons of Using MediatR Pipeline Behavior **Pros:** 1. Centralized Cross-Cutting Concerns: It allows you to centralize logging, validation, and other concerns, making your code cleaner. 2. Consistent Application: Ensures that these concerns are applied consistently across all request handlers. 3. Maintainability: Reduces code duplication and makes your application easier to maintain. 4. Testability: Facilitates unit testing by isolating cross-cutting concerns. **Cons:** 1. Learning Curve: There is a bit of a learning curve to understand and implement MediatR and its pipeline behaviors. 2. Overhead: Adding multiple pipeline behaviors can introduce some performance overhead, although this is usually negligible. --- ## Conclusion MediatR Pipeline Behavior in .NET 8 offers a powerful way to handle cross-cutting concerns in a centralized and consistent manner. By following the steps outlined in this blog post, you can implement this feature in your applications, leading to cleaner, more maintainable, and more testable code. While there are some trade-offs to consider, the benefits often outweigh the drawbacks, making MediatR Pipeline Behavior a valuable tool in your .NET development toolkit.
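---

## Bonus: Seeing the Pipeline in Action

For completeness, here is a minimal, self-contained sketch of dispatching `MyRequest` through the pipeline. It is a hypothetical console `Program.cs` with top-level statements, assuming MediatR 12+ (the same registration API used above), not part of the original walkthrough.

```csharp
using MediatR;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddMediatR(cfg => cfg.RegisterServicesFromAssembly(typeof(MyRequest).Assembly));
services.AddTransient(typeof(IPipelineBehavior<,>), typeof(LoggingBehavior<,>));

using var provider = services.BuildServiceProvider();
var mediator = provider.GetRequiredService<IMediator>();

// The behavior wraps the handler, so the expected console output is:
//   Handling MyRequest
//   Handled MyResponse
//   Hello, World
var response = await mediator.Send(new MyRequest { Message = "World" });
Console.WriteLine(response.ResponseMessage);
```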
mahendraputra21
1,913,198
shadcn-ui/ui codebase analysis: How does shadcn-ui CLI work? — Part 2.7
I wanted to find out how shadcn-ui CLI works. In this article, I discuss the code used to build the...
0
2024-07-05T20:46:55
https://dev.to/ramunarasinga/shadcn-uiui-codebase-analysis-how-does-shadcn-ui-cli-work-part-27-2mpl
javascript, opensource, nextjs, shadcnui
I wanted to find out how shadcn-ui CLI works. In this article, I discuss the code used to build the shadcn-ui/ui CLI. In part 2.6, we looked at the function getTsConfigAliasPrefix, which returns the alias used in the paths of your project’s tsconfig.json. Let’s move on to the next line of code.

![](https://media.licdn.com/dms/image/D4E12AQF7xUAIIGQwVQ/article-inline_image-shrink_1500_2232/0/1720212190037?e=1725494400&v=beta&t=9yG8vfoP5NiM3EDRdjLqeUgL9KT9mp9oHhkTnfM5fmA)

At L84, there is a simple check that returns null if any of projectType, tailwindCssFile or tsConfigAliasPrefix does not exist. Let’s learn more about isTypeScriptProject(cwd).

```js
const isTsx = await isTypeScriptProject(cwd)
```

[isTypeScriptProject](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L174) is a function imported from [ui/packages/cli/src/utils/get-project-info.ts](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L174), and this function checks if the cwd (current working directory) has a tsconfig.json file.

```js
export async function isTypeScriptProject(cwd: string) {
  // Check if cwd has a tsconfig.json file.
  return pathExists(path.resolve(cwd, "tsconfig.json"))
}
```

pathExists
----------

pathExists is a function imported from [fs-extra](https://www.npmjs.com/package/fs-extra)

```js
import fs, { pathExists } from "fs-extra"
```

Conclusion:
-----------

To check if a project uses TypeScript, you can do the same thing the shadcn-ui/ui CLI package does: check whether a tsconfig.json path exists in the given cwd using the pathExists function provided by [fs-extra](https://www.npmjs.com/package/fs-extra).

> _Want to learn how to build shadcn-ui/ui from scratch? Check out_ [_build-from-scratch_](https://tthroo.com/)

About me:
---------

Website: [https://ramunarasinga.com/](https://ramunarasinga.com/)

Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/)

Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga)

Email: [[email protected]](mailto:[email protected])

[Build shadcn-ui/ui from scratch](https://tthroo.com/)

References:
-----------

1. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L84C3-L88C47](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L84C3-L88C47)
2. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L174](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L174)
3. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L10](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L10)
4. [https://www.npmjs.com/package/fs-extra](https://www.npmjs.com/package/fs-extra)
ramunarasinga
1,913,197
Creating Virtual Machine Scale Set Using Azure Portal
Table of Contents Introduction ...
0
2024-07-05T20:45:39
https://dev.to/romanus_onyekwere/creating-virtual-machine-scale-set-using-azure-portal-1fp2
virtualmachine, scaleset, resources, networking
**Table of Contents**

Introduction
Step 1. Sign in to Azure Account
Step 2. Create a Virtual Machine Scale Set
Step 3. Configure Basic Settings

Introduction

Creating a Virtual Machine Scale Set (VMSS) in Azure involves several steps. A VMSS allows you to deploy and manage a set of identical, auto-scaling virtual machines. Here's a step-by-step guide to creating a VMSS using the Azure portal:

Step 1: Sign in to Azure Account

(a) Go to the Azure Portal (portal.azure.com)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0p34hly86tn34f1mfcy.png)

(b) Sign in with your Azure credentials

Step 2: Create a Virtual Machine Scale Set

(a) In the search bar, type **vmss** and select **Virtual Machine Scale Set**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8i8lg3fvcp2npbhrlw5s.png)

(b) Click **+ Create** to create a **Virtual Machine Scale Set**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6q1gt1qkmihhlvih7j10.png)

(c) Fill in the required fields under **Project details**, which include:

**Subscription:** Choose your Azure subscription.
**Resource Group:** Create a new name for the resource group (vmss-RG).

(d) Under **Scale set details**, enter the Virtual Machine Scale Set name (hagital-vmss). Leave the region as (US) East US and the availability zone as None.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/okkyyzvhkzpwq05cxu7d.png)

(e) Under **Orchestration**, set the orchestration mode to 'Uniform'.

(f) Under **Scaling**, set the scaling mode to 'Autoscaling'.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6abcyp23gzntbvqj54db.png)

In the scaling configuration, click **Configure** to review all scaling options.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/syng4nzi7vd8x29a2dya.png)

(g) Click **+ Add a scaling condition**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/25webzpgy0adpd64qrqn.png)

Another window will open at the right-side corner. Continue with the prompt captioned **Add a scaling condition**:

Leave the condition name as vmss-condition.
Leave the scale mode as autoscaling.

Initial instance count ........ 2

**Instance limit**
Minimum ....... 2
Maximum ....... 4
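If you prefer scripting the same setup, here is a rough Azure CLI equivalent of the portal walkthrough. Treat it as a sketch: the resource names (vmss-RG, hagital-vmss, vmss-condition) come from this guide, while the image and admin settings are placeholder assumptions you should adjust.

```sh
# Create the resource group used in the walkthrough
az group create --name vmss-RG --location eastus

# Create a Uniform-orchestration scale set with 2 initial instances
# (the image and admin credentials below are placeholders, not from this guide)
az vmss create \
  --resource-group vmss-RG \
  --name hagital-vmss \
  --orchestration-mode Uniform \
  --image Ubuntu2204 \
  --instance-count 2 \
  --admin-username azureuser \
  --generate-ssh-keys

# Autoscale profile matching the min 2 / max 4 limits set in the portal
az monitor autoscale create \
  --resource-group vmss-RG \
  --resource hagital-vmss \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name vmss-condition \
  --min-count 2 --max-count 4 --count 2
```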
romanus_onyekwere
1,913,196
Security news weekly round-up - 5th July 2024
Weekly review of top security news between June 28, 2024, and July 5, 2024
6,540
2024-07-05T20:44:31
https://dev.to/ziizium/security-news-weekly-round-up-5th-july-2024-kmd
security
---
title: Security news weekly round-up - 5th July 2024
published: true
description: Weekly review of top security news between June 28, 2024, and July 5, 2024
tags: security
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/0jupjut8w3h9mjwm8m57.jpg
series: Security news weekly round-up
---

## __Introduction__

Hello everyone, welcome to this edition of our security news weekly round-up here on DEV. As you prepare for the weekend (or if you've passed that, depending on your time zone), we'll review some security news that you should know about.

In summary, the articles that we'll cover are about the following:

* Scams on Social Media
* Spyware
* Application Security
* System Breach

<hr/>

## [Hijacked: How hacked YouTube channels spread scams and malware](https://www.welivesecurity.com/en/scams/hijacked-hacked-youtube-channels-scams-malware/)

It's not news and it's also news. I am saying this because at the core of these account takeovers is falling victim to phishing attacks. [In some cases, these might render your two-factor authentication useless](https://www.theverge.com/2023/3/24/23654996/linus-tech-tips-channel-hack-session-token-elon-musk-crypto-scam). Nonetheless, we all need reminders and education about these types of attacks. What's more, we should educate others not to click on links in the description of a video that's offering a paid application for "free".

Stay safe and read the article. The following excerpt should get you started.

> ...it all starts with good ol’ phishing. Attackers create fake websites and send emails that look like they are from YouTube or Google and attempt to trick the targets into surrendering their “keys to the kingdom”

## [CapraRAT Spyware Disguised as Popular Apps Threatens Android Users](https://thehackernews.com/2024/07/caprarat-spyware-disguised-as-popular.html)

Being alert to typosquatting can potentially save you from this malware, because one of the applications that contains the malware is called TikToks. If you want to know what I mean, read the last word of the previous sentence carefully. Did you spot it? 😊

The article is quite detailed on the attack. Meanwhile, I will launch you on your reading journey with the excerpt below.

> The campaign, dubbed CapraTube, was first outlined by the cybersecurity company in September 2023, with the hacking crew employing weaponized Android apps impersonating legitimate apps like YouTube to deliver a spyware called CapraRAT, a modified version of AndroRAT with capabilities to capture a wide range of sensitive data.

## [3 million iOS and macOS apps were exposed to potent supply-chain attacks](https://arstechnica.com/security/2024/07/3-million-ios-and-macos-apps-were-exposed-to-potent-supply-chain-attacks/)

They have patched the flaw. However, it begs the question: is any system safe? Or are systems just waiting to be exploited by anyone who knows where to look and how to look? To make it more thought-provoking, there are three vulnerabilities. Do you want more mind-blowing facts? It all dates back to 2014.

Take your reading inspiration from the following excerpt, then read the entire article. You'll learn something new.

> The three vulnerabilities EVA discovered stem from an insecure verification email mechanism used to authenticate developers of individual pods. The developer entered the email address associated with their pod. The trunk server responded by sending a link to the address. When a person clicked on the link, they gained access to the account.
## [Hackers Stole Secrets From OpenAI](https://www.securityweek.com/hackers-stole-secrets-from-openai/)

I'll guess that when you think of OpenAI, your subconscious will also mention ChatGPT 😊. Based on the article, nothing that sensitive was taken; therefore, OpenAI did not report the breach to an agency like the FBI.

This article made the final edit before publishing because ChatGPT is that popular, and you deserve to know when stuff like this happens to its maker, OpenAI.

Here is an excerpt from the linked article above:

> After the breach, Leopold Aschenbrenner, an OpenAI technical program manager, focused on ensuring that future A.I. technologies do not cause serious harm, sent a memo to OpenAI’s board of directors, arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.

## __Credits__

Cover photo by [Debby Hudson on Unsplash](https://unsplash.com/@hudsoncrafted).

<hr>

That's it for this week, and I'll see you next time.
ziizium
1,913,195
How to Implement a Tree of Thoughts in Python
This blog post walks you through the implementation of a Tree of Thoughts (ToT) prompting technique in Python, a powerful tool for systematically exploring and evolving ideas. Using Anthropic's Claude Sonnet 3.5 language model, we build a hierarchical structure where each node represents a thought, and the tree expands by generating new thoughts. Whether you're looking to brainstorm, problem-solve, or innovate, this post guides you through setting up and utilizing a Tree of Thoughts in your projects.
0
2024-07-05T20:36:00
https://dev.to/stephenc222/how-to-implement-a-tree-of-thoughts-in-python-4jmc
treeofthought, ai, python, anthropic
--- title: How to Implement a Tree of Thoughts in Python published: true description: This blog post walks you through the implementation of a Tree of Thoughts (ToT) prompting technique in Python, a powerful tool for systematically exploring and evolving ideas. Using Anthropic's Claude Sonnet 3.5 language model, we build a hierarchical structure where each node represents a thought, and the tree expands by generating new thoughts. Whether you're looking to brainstorm, problem-solve, or innovate, this post guides you through setting up and utilizing a Tree of Thoughts in your projects. tags: - TreeofThought - AI - Python - Anthropic # cover_image: https://direct_url_to_image.jpg # Use a ratio of 100:42 for best results. published_at: 2024-07-05 20:36 +0000 --- ![Robot sitting on a tree limb](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f58uiniwpolxysezd5ef.jpg) The concept of the "Tree of Thoughts" (ToT) is a powerful tool for allowing an LLM to explore a non-linear path through a problem space, particularly for exploring and evolving ideas systematically. This blog post tutorial walks you through the details of implementing a Tree of Thoughts (ToT) prompting technique in Python, leveraging Anthropic's Claude Sonnet 3.5 language model to generate and expand thoughts. The following example provides a practical illustration of how to build and utilize a ToT to ideate solutions efficiently. All code for this tutorial can be found in my [GitHub repository](https://github.com/stephenc222/example-tree-of-thoughts-prompting). ## What is a Tree of Thoughts? A Tree of Thoughts is a hierarchical structure where each node represents a distinct thought or idea. The tree grows by expanding nodes with additional thoughts, creating branches that represent different avenues of exploration. This approach allows for a structured and systematic exploration of ideas, helping to uncover innovative solutions and insights. ## Implementation Overview We'll walk through a Python implementation of a Tree of Thoughts, explaining each component and how it interacts with the Anthropic Claude Sonnet 3.5 API to generate and evolve thoughts. ### Setting Up the Environment First, ensure you have the necessary libraries installed. You'll need `anthropic` and `python-dotenv`. Install these using pip if you haven't already: ```sh pip install anthropic python-dotenv ``` ### Importing Libraries and Setting Up Anthropic Ensure you have a `.env` file with your Anthropic API key, such as the following: ```bash ANTHROPIC_API_KEY=YOUR_ANTHROPIC_API_KEY ``` The `AnthropicService` class that uses your Anthropic API key is a wrapper around the Anthropic API that provides a simple interface for interacting with the API. 
In our `ai.py` file, we define the `AnthropicService` class: ```python import os from dotenv import load_dotenv from anthropic import Anthropic # Load environment variables from .env file load_dotenv() MAX_TOKENS = 1024 DEFAULT_TEMPERATURE = 0 DEFAULT_ANTHROPIC_MODEL_NAME = "claude-3-5-sonnet-20240620" class AnthropicService: def __init__(self, model_name: str = None, anthropic: Anthropic = None): if not os.environ.get("ANTHROPIC_API_KEY"): raise ValueError("No valid API key found for Anthropic.") self.client = anthropic or Anthropic( api_key=os.environ["ANTHROPIC_API_KEY"]) self.model_name = model_name or DEFAULT_ANTHROPIC_MODEL_NAME def generate_response(self, prompt: str, max_tokens: int = MAX_TOKENS, temperature: float = DEFAULT_TEMPERATURE) -> str: msg = self.client.messages.create( model=self.model_name, max_tokens=max_tokens, temperature=temperature, messages=[ { "role": "user", "content": prompt, }, ], ) return msg.content[0].text ``` Import the `AnthropicService` class into `app.py` to use it: ```python from ai import AnthropicService ``` ### Defining the ThoughtNode Class The `ThoughtNode` class represents a node in the Tree of Thoughts. Each node contains a thought and a list of child nodes. ```python class ThoughtNode: def __init__(self, thought, children=None): self.thought = thought self.children = children or [] ``` ### Building the TreeOfThought Class The `TreeOfThought` class orchestrates the process of generating and evolving thoughts. It initializes with a root prompt and interacts with the AI service to expand the tree iteratively. ```python class TreeOfThought: def __init__(self, root_prompt, ai_service=None, max_iterations=3, max_tokens=250): self.root = ThoughtNode(root_prompt) self.max_iterations = max_iterations self.ai_service = ai_service or AnthropicService() self.current_thoughts = [self.root] self.max_tokens = max_tokens ``` ### Calling the Language Model The `call_llm` method sends a prompt to the AI service and returns the response. This method handles any errors that may occur during the API call. ```python def call_llm(self, prompt): try: response = self.ai_service.generate_response( prompt, max_tokens=self.max_tokens, ) return response except Exception as e: print(f"Error calling LLM: {e}") return [] ``` ### Exploring and Expanding Thoughts The `explore_thoughts` method generates new thoughts based on the current thoughts in the tree. It prompts the AI to provide two next thoughts for each current thought, creating new nodes and expanding the tree. ```python def explore_thoughts(self, thought_nodes): new_thought_nodes = [] for thought_node in thought_nodes: prompt = f"Given the current thought: '{thought_node.thought}', provide two concise next thoughts that evolve this idea further." response = self.call_llm(prompt) if response: new_thought_node = ThoughtNode(response) thought_node.children.append(new_thought_node) new_thought_nodes.append(new_thought_node) return new_thought_nodes ``` ### Running the Tree of Thoughts The `run` method orchestrates the iterative process of expanding the tree. It continues exploring thoughts until the maximum number of iterations is reached. 
```python def run(self): iteration = 0 while self.current_thoughts and iteration < self.max_iterations: print(f"Iteration {iteration + 1}:") self.current_thoughts = self.explore_thoughts( self.current_thoughts) for thought_node in self.current_thoughts: print(f"Explored Thought: {thought_node.thought}") iteration += 1 ``` ### Updating and Printing the Tree of Thoughts The `update_starting_thought` method allows for changing the root thought of the tree. The `print_tree` method provides a visual representation of the entire tree, showing the hierarchy of thoughts. ```python def update_starting_thought(self, new_thought): self.root = ThoughtNode(new_thought) self.current_thoughts = [self.root] def print_tree(self, node, level=0): indent = ' ' * (level * 2) thought_lines = node.thought.split('\n') for idx, line in enumerate(thought_lines): if idx == 0: print(f"{indent}- {line}") else: print(f"{indent} {line}") for child in node.children: self.print_tree(child, level + 1) ``` ### Execution Finally, we set the starting prompt and run the Tree of Thoughts, printing the final structure. ```python if __name__ == "__main__": starting_prompt = "Think of a solution to reduce the operational costs of your business." tot = TreeOfThought(starting_prompt) tot.run() print("=" * 100) print("Final Tree of Thoughts:") tot.print_tree(tot.root) ``` You should see output of the tree of thoughts like this: ```txt Final Tree of Thoughts: - Think of a solution to reduce the operational costs of your business. - Here are two concise next thoughts that evolve the idea of reducing operational costs: 1. Analyze energy consumption and implement efficiency measures. 2. Explore automation options for repetitive tasks to reduce labor costs. - Here are two concise next thoughts that further evolve the idea of reducing operational costs: 1. Implement a lean inventory management system to minimize holding costs and waste. 2. Negotiate better terms with suppliers and consider strategic partnerships for bulk purchasing discounts. - Here are two concise next thoughts that further evolve the idea of reducing operational costs: 1. Adopt energy-efficient technologies and practices to lower utility expenses and reduce environmental impact. 2. Invest in automation and AI-driven processes to streamline operations and reduce labor costs in the long term. ``` ## Conclusion This Python implementation of a Tree of Thoughts prompting technique demonstrates how to systematically explore and evolve ideas using Anthropic's Claude Sonnet 3.5 language model. By structuring thoughts in a hierarchical tree, you can uncover innovative solutions and insights efficiently. This approach is particularly valuable for brainstorming, problem-solving, and any scenario where exploring multiple avenues of thought is beneficial. Whether you're working on business strategies, creative writing, research, or any other problem involving non-linear thinking, the Tree of Thoughts (ToT) prompting technique offers a structured and powerful method for ideation and exploration.
stephenc222
1,913,193
Outlier Detection in Election Data Using Geospatial Analysis - AKWA IBOM
Introduction The aim of this project is to uncover potential election irregularities to...
0
2024-07-05T20:33:21
https://dev.to/mwangcmn/outlier-detection-in-election-data-using-geospatial-analysis-akwa-ibom-3b06
# Introduction

The aim of this project is to uncover potential election irregularities and enable the electoral commission to ensure the transparency of election results. In this project, I will identify outlier polling units where the voting results deviate significantly from neighbouring units.

## Data Understanding

The dataset used in this analysis represents polling units in the state of Akwa Ibom only. The data used can be found [here](https://drive.google.com/file/d/1dUewV7fM1TJA1XeaZCMWaYQ98Rxlb7CQ/view?usp=sharing).

I conducted this analysis in Python as follows:

```
from google.colab import drive, files
drive.mount('/content/drive')

# Import libraries
import pandas as pd
from geopy.geocoders import OpenCage

# Note: this path must be defined (not commented out) for read_csv below to work
path = '/content/drive/MyDrive/Colab Notebooks/Nigeria_Elections/'
data = pd.read_csv(path + "AKWA_IBOM_crosschecked.csv")
```

Here is a summary of the columns in the dataset:

1. **State**: The name of the Nigerian state where the election took place (e.g., “AKWA IBOM”).
2. **LGA** (Local Government Area): The specific local government area within the state (e.g., “ABAK”).
3. **Ward**: The electoral ward within the local government area (e.g., “ABAK URBAN 1”).
4. **PU-Code** (Polling Unit Code): A unique identifier for the polling unit (e.g., “3/1/2001 0:00”).
5. **PU-Name** (Polling Unit Name): The name or location of the polling unit (e.g., “VILLAGE SQUARE, IKOT AKWA EBOM” or “PRY SCH, IKOT OKU UBARA”).
6. **Accredited Voters**: The number of voters accredited to participate in the election at that polling unit.
7. **Registered Voters**: The total number of registered voters in that polling unit.
8. **Results Found**: Indicates whether results were found for this polling unit (usually TRUE or FALSE).
9. **Transcription Count**: The count of how many times the results were transcribed (may be -1 if not applicable).
10. **Result Sheet Stamped**: Indicates whether the result sheet was stamped (TRUE or FALSE).
11. **Result Sheet Corrected**: Indicates whether any corrections were made to the result sheet (TRUE or FALSE).
12. **Result Sheet Invalid**: Indicates whether the result sheet was deemed invalid (TRUE or FALSE).
13. **Result Sheet Unclear**: Indicates whether the result sheet was unclear (TRUE or FALSE).
14. **Result Sheet Unsigned**: Indicates whether the result sheet was unsigned (TRUE or FALSE).
15. **APC**: The number of votes received by the All Progressives Congress (APC) party.
16. **LP**: The number of votes received by the Labour Party (LP).
17. **PDP**: The number of votes received by the People’s Democratic Party (PDP).
18. **NNPP**: The number of votes received by the New Nigeria People’s Party (NNPP).

I then created the Address column by concatenating the polling unit name, the ward, the local government area and the state, which will be useful during geocoding:

```
data['Address'] = data['PU-Name'] + ',' + data['Ward'] + ',' + data['LGA'] + ',' + data['State']
```

To obtain the Latitude and Longitude columns, I utilized geocoding techniques.
I generated an API key for the OpenCage Geocoding API and defined a function, geocode_address, to geocode our new Address column and obtain the Latitude and Longitude columns:

```
# Assumed setup (the original post mentions creating an OpenCage API key but
# never shows this line): instantiate the geocoder used below.
geolocator = OpenCage(api_key='YOUR_OPENCAGE_API_KEY')

def geocode_address(Address):
    try:
        location = geolocator.geocode(Address)
        return location.latitude, location.longitude
    except:
        return None, None

data[['Latitude', 'Longitude']] = data['Address'].apply(lambda x: pd.Series(geocode_address(x)))
```

A quick look at our dataset:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3g0kinf7xbb6at2jz4z3.PNG)

It looks like our function works, and I was able to obtain the Latitude and Longitude columns. As there are still null values in these two columns, I will impute them using SimpleImputer, which replaces the missing values with the mean.

```
from sklearn.impute import SimpleImputer

imputer = SimpleImputer(strategy = 'mean')
data[['Latitude', 'Longitude']] = imputer.fit_transform(data[['Latitude', 'Longitude']])
data.to_csv('AKWA_IBOM_geocode.csv', index = False)
```

### Identifying Neighbours

I defined a radius of 1 km to identify which polling units are considered neighbours:

```
# Calculate distances and find neighbours
# Note: this pairwise loop is O(n^2) and can be slow on large datasets.
from geopy.distance import geodesic

neighbours = {}

def neighbouring_pu(data, radius=1.0):
    for i, row in data.iterrows():
        neighbours[i] = []
        for j, row2 in data.iterrows():
            if i != j:
                distance = geodesic((row['Latitude'], row['Longitude']),
                                    (row2['Latitude'], row2['Longitude'])).km
                if distance <= radius:
                    neighbours[i].append(j)
    return neighbours

neighbours = neighbouring_pu(data, radius=1.0)
```

**Outlier Calculation - Score**

I will define a function, get_outlier_scores, that calculates the outlier scores for voting data in this dataset. It does so by comparing the votes each row received for the various parties (APC, LP, PDP, NNPP) to the average votes received by its neighbouring rows, which are specified in a dictionary, neighbours. For each row, the function computes the absolute difference between the votes in that row and the average votes of its neighbours for each party, and stores these differences as outlier scores. Finally, it returns a new DataFrame that combines the original voting data with the calculated outlier scores. This allows for the identification of rows with voting patterns that significantly differ from their neighbours.

```
def get_outlier_scores(data, neighbours):
    outlier_scores = []
    parties = ['APC', 'LP', 'PDP', 'NNPP']
    for i, row in data.iterrows():
        scores = {}
        for party in parties:
            votes = row[party]
            neighbour_votes = data.loc[neighbours[i], party].mean() if neighbours[i] else 0
            scores[party + '_outlier_score'] = abs(votes - neighbour_votes)
        outlier_scores.append(scores)
    outlier_scores_data = pd.DataFrame(outlier_scores)
    return pd.concat([data, outlier_scores_data], axis = 1)

outlier_scores_df = get_outlier_scores(data, neighbours)
```

**Sorting and Reporting**

I sorted the data by the outlier scores for each party and obtained the following detailed report, which includes the top five outliers for each party, with the 'PU-Code', the number of votes, and the outlier score.
#### All Progressives Congress (APC)

| PU-Code | APC | APC_outlier_score |
|:-------------|------:|--------------------:|
| 03-05-11-009 | 324 | 228.52 |
| 03-29-05-013 | 194 | 167.334 |
| 03-30-07-001 | 180 | 153.325 |
| 03-05-09-014 | 194 | 152.149 |
| 03-28-05-003 | 180 | 138.132 |

#### Labour Party (LP)

| PU-Code | LP | LP_outlier_score |
|:-------------|-----:|-------------------:|
| 03-05-11-009 | 59 | 45.451 |
| 03-29-05-013 | 42 | 6.65894 |
| 03-30-07-001 | 29 | 6.34942 |
| 03-05-09-014 | 3 | 26.5831 |
| 03-28-05-003 | 91 | 61.5261 |

#### People’s Democratic Party (PDP)

| PU-Code | PDP | PDP_outlier_score |
|:-------------|------:|--------------------:|
| 03-05-11-009 | 7 | 27.3627 |
| 03-29-05-013 | 181 | 145.232 |
| 03-30-07-001 | 17 | 18.8739 |
| 03-05-09-014 | 36 | 24.2221 |
| 03-28-05-003 | 12 | 48.2519 |

#### New Nigeria People’s Party (NNPP)

| PU-Code | NNPP | NNPP_outlier_score |
|:-------------|-------:|---------------------:|
| 03-05-11-009 | 0 | 0.27451 |
| 03-29-05-013 | 6 | 4.14865 |
| 03-30-07-001 | 0 | 1.85521 |
| 03-05-09-014 | 0 | 2.36104 |
| 03-28-05-003 | 0 | 2.36104 |

**Visualizing the outlier scores**

I generated scatterplots to visualize the geographical distribution of polling units based on their outlier scores for the four political parties (APC, LP, PDP, NNPP). Each point represents a polling unit plotted by its latitude and longitude. Each plot provides a clear visual representation of how the outlier scores are geographically distributed, making it easier to identify patterns or anomalies in the data.

```
import matplotlib.pyplot as plt
import seaborn as sns

parties = ['APC', 'LP', 'PDP', 'NNPP']
for party in parties:
    plt.figure(figsize=(10, 6))
    sns.scatterplot(data=outlier_scores_df, x='Latitude', y='Longitude', hue=party + '_outlier_score', palette='viridis')
    plt.title(f'Polling Units by {party} Outlier Score')
    plt.xlabel('Latitude')
    plt.ylabel('Longitude')
    plt.legend(title=party + ' Outlier Score')
    plt.savefig(f'polling_units_{party}_outlier_score.png')
    plt.show()
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o9f1a93wtcrblr4h3sp9.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kymfzldnuok135p6u92e.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gy14cv4dpuphgmui78bm.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kgjm2yljnnuvfgjv6k97.png)

## Deliverables

1. Find the full Notebook [here](https://github.com/mwang-cmn/Outlier-Detection---Geospatial-Analysis-of-Election-Data/blob/main/Nigerian_Elections_Outlier_Detection.ipynb)
2. Full [Report](https://github.com/mwang-cmn/Outlier-Detection---Geospatial-Analysis-of-Election-Data/blob/main/README.md) - Top five outliers for each party.
3. File with Latitude and Longitude - [CSV](https://drive.google.com/file/d/1rxI3TK_MupPdkgvb0BYtC8Hwux3nhGoo/view?usp=sharing)
4. File with sorted polling units by outlier scores - [CSV](https://drive.google.com/file/d/153MFcDuYsST1Q-69HQuenVJw-CqqTKQ9/view?usp=sharing)
mwangcmn
1,913,191
Exploring DevOps: A New Adventure
Introduction Hey everyone! As you know, I've always been a data guy, diving deep into the...
0
2024-07-05T20:31:36
https://dev.to/mrsaadfazal/exploring-devops-a-new-adventure-21ke
devops, aws
## Introduction Hey everyone! As you know, I've always been a data guy, diving deep into the realms of data science. But recently, I've decided to give DevOps a try. Why, you ask? Well, thanks to the amazing DevOps community and a special shoutout to "kubeden" for enlightening me on how fascinating this field can be, I thought, why not explore it? So, I decided to take a detour from my data journey and spend some time in the DevOps world. ## My Background I’ve been a developer for three years now, with some experience in cloud computing and AWS. So, I figured learning DevOps might be a bit easier given my background. Here’s a rundown of what I’ve learned in the past four days during my free time: ## AWS ### IAM 1. **Users**: Managing user accounts and permissions. 2. **Groups**: Organizing users into groups for easier management. 3. **Policies (Permissions)**: Defining and assigning permissions to users and groups. ### Amazon Elastic Container Service (ECS) and ECR 1. **Deploy Docker Container**: - **Create Cluster**: Setting up a new ECS cluster. - **Service API**: - **Tasks**: Running individual containers. - **Load Balancer**: Distributing traffic among containers. - **Health Checker**: Monitoring container health. ### Elastic Beanstalk Deploying and managing applications without worrying about the underlying infrastructure. ## Docker ### Basics 1. **Installation of Docker CLI and Desktop**: Getting Docker up and running on my machine. 2. **Understanding Images vs. Containers**: Learning the difference between Docker images and containers. 3. **Running Ubuntu Image in Container**: Starting a container with Ubuntu. 4. **Multiple Containers**: Managing multiple containers simultaneously. 5. **Port Mappings**: Mapping container ports to host ports. 6. **Environment Variables**: Setting environment variables for containers. 7. **Dockerization of Node.js Application**: - **Dockerfile**: Creating a Dockerfile for a Node.js app. - **Caching Layers**: Using caching to speed up builds. - **Publishing to Hub**: Pushing images to Docker Hub. ### Advanced 1. **Docker Compose**: - **Services**: Defining multi-container applications. - **Port Mapping**: Configuring port mappings for services. - **Env Variables**: Setting environment variables for services. 2. **Docker Networking**: - **Bridge**: Default network driver. - **Host**: Using the host’s networking stack. 3. **Volume Mounting**: Persisting data using volumes. 4. **Efficient Caching in Layers**: Optimizing Dockerfile for caching. 5. **Docker Multi-Stage Builds**: Using multi-stage builds to reduce image size. ## Nginx ### Setting Up 1. **Launching an EC2 Instance**: - **Create and configure a virtual machine using EC2**: Choosing an instance type and region. - **Assign a static IP**: Ensuring consistent access. - **Set up security groups**: Allowing HTTP and HTTPS traffic. ### Configuration 1. **Accessing the EC2 Instance**: Connecting via SSH. 2. **Updating and Installing Necessary Packages**: Keeping everything up-to-date. 3. **Cloning the Project Repository**: Downloading my Node.js app. 4. **Installing Project Dependencies**: Using npm install. 5. **Running the Node.js Application**: Managing with pm2. 6. **Setting Up a Domain**: Registering and pointing a domain to my Elastic IP. 7. **Configuring Nginx**: Proxying requests to the Node.js app. 8. **Setting Up SSL with Let's Encrypt**: Using Certbot for SSL certificates. ## Kafka ### Key Concepts 1. **High Throughput and Less Storage**: Optimized for large data streams. 2. 
**Components**:
   - **Producers and Consumers**: Sending and receiving messages.
   - **Topics and Partitions**: Organizing messages.
   - **Consumer Groups**: Managing multiple consumers.

### Models
1. **Queue and Pub/Sub**: Handling different messaging patterns.
2. **Zookeeper**: Managing Kafka infrastructure.
3. **Admin, Producers, and Consumers**: Setting up and using Kafka.

## Serverless

### Overview
1. **No Server Management**: Focusing on code, not servers.
2. **Event-Driven Execution**: Functions triggered by events.
3. **Automatic Scaling**: Scaling based on load.
4. **Pay-per-Invocation**: Billing based on function usage.

### Practical Example
1. **Creating a Lambda Function**: Deploying a function to AWS Lambda.
2. **Trigger Setup**: Using API Gateway to invoke the function.
3. **Testing**: Verifying with a browser and Postman.

## What's Next: My Learning Plan for the Next 4 Days

In the next four days, I plan to dive deeper into the following areas:

1. **More AWS Services**: Expanding my knowledge of various AWS services beyond the basics.
2. **Azure**: Getting familiar with Microsoft's cloud platform and its unique features.
3. **Terraform**: Learning infrastructure as code to manage cloud resources efficiently.
4. **Ansible**: Exploring configuration management and automation.
5. **CI/CD**: Strengthening my understanding of continuous integration and continuous deployment practices.
6. **GitHub Workflows**: Refining my skills in creating and managing workflows on GitHub.

## Conclusion

These days, my free time goes into learning DevOps. I will be sharing more about what I have learned, and in a new post I will share the resources too.

For the DevOps community: do let me know your thoughts. What should I put more focus on in this DevOps realm?

Stay curious, keep learning, and happy coding!
mrsaadfazal
1,913,186
Test--Linux User Creation Bash Script
Using Bash Scripts to Automate User Management in Linux In environments with multiple users and...
0
2024-07-05T20:24:35
https://dev.to/orire/test-linux-user-creation-bash-script-13mj
**Using Bash Scripts to Automate User Management in Linux** In environments with multiple users and complex access requirements, managing user accounts on a Linux system can be a time-consuming task. Scripting automation improves security, preserves uniformity across user configurations, and streamlines this procedure. This article will examine a Bash script that can be used to automate tasks related to user management on a Linux system, with a focus on the script's features, organization, and advantages. ### Overview of the Script The script (`create_users.sh`) is designed to automate several key aspects of user management: 1. **Initialization and Setup** ```bash # Define the log and password file path LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.txt" # Ensure necessary directories exist and set permissions sudo mkdir -p /var/log /var/secure sudo touch $LOG_FILE $PASSWORD_FILE sudo chmod 600 $LOG_FILE $PASSWORD_FILE ``` - **Purpose**: Initializes paths for logging user management activities (`LOG_FILE`) and storing generated passwords (`PASSWORD_FILE`). - **Setup**: Creates required directories (`/var/log` and `/var/secure`) if they do not exist and sets strict permissions to protect sensitive information. 2. **Input File Validation** ```bash # Check if the input file is provided if [ -z "$1" ]; then echo "Error: Please provide a text file containing user data as an argument." exit 1 fi ``` - **Purpose**: Ensures the script is executed with an input file (`user.txt`) containing user data. - **Error Handling**: Exits gracefully if no input file is provided, preventing execution without necessary data. 3. **User and Group Management** ```bash # Read the input file line by line while IFS= read -r line; do # Skip empty lines [ -z "$line" ] && continue # Extract username and groups IFS=';' read -r username groups <<< "$line" username=$(echo $username | xargs) # Trim whitespace groups=$(echo $groups | xargs) # Trim whitespace # Create user's personal group if not exists if ! getent group "$username" > /dev/null; then sudo groupadd "$username" echo "$(date): Created group $username" >> $LOG_FILE fi # Create user if not exists if ! id -u "$username" > /dev/null 2>&1; then sudo useradd -m -g "$username" "$username" echo "$(date): Created user $username" >> $LOG_FILE fi # Add user to specified groups IFS=',' read -ra group_array <<< "$groups" for group in "${group_array[@]}"; do group=$(echo $group | xargs) # Trim whitespace if ! getent group "$group" > /dev/null; then sudo groupadd "$group" echo "$(date): Created group $group" >> $LOG_FILE fi sudo usermod -aG "$group" "$username" echo "$(date): Added $username to group $group" >> $LOG_FILE done done < "$1" ``` - **Purpose**: Iterates through each line of the input file (`user.txt`), extracts usernames and group memberships, creates users and groups if they do not exist, and assigns users to specified groups. - **Flexibility**: Supports multiple group memberships per user, ensuring adaptable user management. 4. **Password Management** ```bash # Generate random password password=$(/usr/bin/openssl rand -base64 12) echo "$username,$password" >> $PASSWORD_FILE # Set user's password echo "$username:$password" | sudo chpasswd echo "$(date): Set password for $username" >> $LOG_FILE ``` - **Purpose**: Generates a random password securely using OpenSSL, logs it along with the username in `PASSWORD_FILE`, and sets the password using `chpasswd`. 
- **Security**: Ensures passwords are randomly generated and securely stored, minimizing vulnerabilities.

5. **Permissions and Logging**

```bash
# Set permissions and ownership for home directory
sudo chown -R "$username:$username" "/home/$username"
sudo chmod 700 "/home/$username"
echo "$(date): Set permissions for /home/$username" >> $LOG_FILE
```

- **Purpose**: Sets appropriate permissions (`chmod`) and ownership (`chown`) for each user’s home directory to maintain security and privacy.
- **Logging**: Records all actions (user creation, group management, password setting) in `$LOG_FILE`, providing an audit trail for administrators.

### Conclusion

Linux environments can benefit greatly from the efficiency, consistency, and security that come with automating user management tasks with scripts such as `create_users.sh`. Automating repetitive tasks allows system administrators to concentrate on more strategic aspects of system management and helps guarantee that security best practices are followed.

Platforms such as [HNG Tech](https://hng.tech/internship) provide opportunities for people interested in learning more about automation and system administration to work on real-world projects and challenges, improving their skills in Linux administration and other areas.

**Learn more about HNG Internship:**
- [HNG Internship Overview](https://hng.tech/internship)
- [HNG Premium](https://hng.tech/premium)

By utilizing automation tools effectively, system administrators can enhance workflows, increase operational effectiveness, and contribute to a more secure computing environment. This article gives administrators the fundamental knowledge they need to understand and apply automated user management in Linux, enabling them to improve system security and expedite operations.
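For reference, the script's parsing logic (`IFS=';'` for the username/groups split, then `IFS=','` for the group list) implies an input file with one `username;group1,group2` entry per line. A hypothetical invocation follows; the file contents shown are illustrative examples, not from the original post.

```bash
# user.txt -- one "username;comma-separated groups" entry per line (example data)
#   light;sudo,dev
#   mayowa;dev,www-data

# Run with sudo so the script can create users, groups, and protected files
sudo bash create_users.sh user.txt
```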
orire
1,913,146
Angular Tutorial: Host Element Binding
In the newest versions of Angular, the @HostBinding and @HostListener decorators are no longer...
0
2024-07-05T20:21:14
https://briantree.se/angular-tutorial-host-element-binding/
angular, angulardevelopers, tutorial, webdev
In the newest versions of Angular, the [@HostBinding](https://angular.dev/guide/components/host-elements#the-hostbinding-and-hostlistener-decorators) and [@HostListener](https://angular.dev/guide/components/host-elements#the-hostbinding-and-hostlistener-decorators) decorators are no longer intended for use. As the new [documentation](https://angular.dev/guide/components/host-elements/) states, they...

> “exist exclusively for backwards compatibility”.

There’s a new, more streamlined way to do this type of thing, and if you’ve worked with Angular in the past, it may look familiar to you. It’s kind of an old concept made new again. In this video we’ll look at a couple of examples I created for past videos about these decorators, and we’ll replace them with the newer methods. Also, we’ll update a few other concepts in these components and directives along the way too. Ok, let’s start with a @HostBinding example.

{% embed https://www.youtube.com/embed/hfu0edMz_fk %}

## Using Host Element Class Binding to Replace the @HostBinding Decorator

We have [this application](https://stackblitz.com/edit/stackblitz-starters-ef6phx?file=src%2Fform%2Fform.component.ts) that was originally created to [demonstrate different ways to bind classes on elements](https://youtu.be/IBuZv_WmyrE).

<div>
<img src="https://briantree.se//assets/img/content/uploads/2024/07-05/demo-1.png" alt="Example of a simple email subscription form component with" width="644" height="326" style="width: 100%; height: auto;">
</div>

For one of the examples in this video, I used the @HostBinding decorator to conditionally bind a class on the host element of this form component when the email field status changes from invalid to valid.

<div>
<img src="https://briantree.se/assets/img/content/uploads/2024/07-05/demo-2.png" alt="Example of a component class added with the Angular @HostBinding decorator" width="790" height="230" style="width: 100%; height: auto;">
</div>

So, if I add a valid email here, you can see that the style changes.

<div>
<img src="https://briantree.se/assets/img/content/uploads/2024/07-05/demo-3.png" alt="Example of several styles changing when a form's validity changes by toggling a class with the Angular @HostBinding decorator" width="668" height="388" style="width: 100%; height: auto;">
</div>

Now part of this is the host-bound class changing, and the rest is a class that’s toggled on the body. All of this happens in an observable subscription to the email control’s status change.

<div>
<img src="https://briantree.se/assets/img/content/uploads/2024/07-05/demo-4.png" alt="Example of several styles changing within a reactive form control value change observable subscription" width="842" height="386" style="width: 100%; height: auto;">
</div>

So, let’s modernize all of this. First, we can remove the @HostBinding decorator, and we can remove its import too since we won’t need it anymore. Ok, what we’ll use now is something that has existed since back in the day, when we switched from [AngularJS](https://angularjs.org/) to Angular 2, a.k.a. modern Angular: the host property on the component decorator. In this property, we can bind classes just as we would in the template. Since we’re binding a “valid” class, we’ll use square brackets to bind to the class attribute. And then this class will be bound to our “isValid” property.

#### form.component.ts
```typescript
@Component({
  selector: 'app-form',
  ...
  host: {
    '[class.valid]': 'isValid'
  }
})
export class FormComponent {
  private isValid = false;
  ...
}
```

Ok, at this point, what we have now is equivalent to what we had before we removed the decorator, but there’s still more we can do here.

### How to Convert Form Value Observable to a Signal

One thing we can do is use [signals](https://angular.dev/guide/signals) to bind directly to the email control value status with the new [toSignal()](https://angular.dev/api/core/rxjs-interop/toSignal) function. This function will convert an observable to a signal. So, we need to pass it the control status changes observable. Then we’ll add a [pipe](https://rxjs.dev/api/index/function/pipe), and we’ll [map](https://rxjs.dev/api/operators/map) the status so that we can return a Boolean value based on whether the control status is valid or not.

```typescript
private isValid = toSignal(this.emailControl.statusChanges
    .pipe(map(status => {
        return status === 'VALID';
    })));
```

So now this “isValid” property is a signal that will automatically update when the status of the control changes. This means that we’ll need to add parentheses to the property in our class binding.

#### Before:
```typescript
host: {
    '[class.valid]': 'isValid'
}
```

#### After:
```typescript
host: {
    '[class.valid]': 'isValid()'
}
```

Ok, now that the status change has been converted to a signal, we can actually use the new [effect()](https://angular.dev/guide/signals#effects) function to toggle the valid class on the body instead of the subscription.

### How to Use an effect() to Toggle a Class When a Form Field Status Changes

To do this, let’s add the effect() function within the constructor. Then we can just copy the code that currently toggles the class and paste it into the effect(). Then we just need to change this condition to instead use the “isValid” signal. Now this will execute every time the “isValid” signal value changes, so we won’t need the old subscription. We won’t need the OnInit() method anymore either. We can remove the DestroyRef too. Then, we can remove all of the imports as well.

```typescript
constructor(private renderer: Renderer2) {
    effect(() => {
        this.isValid() ?
            this.renderer.addClass(document.body, 'valid') :
            this.renderer.removeClass(document.body, 'valid');
    });
}
```

Ok, that’s about all we can probably change here. Now it should look and work just like it did before, but everything is now updated to work in a modern Angular way without the @HostBinding decorator. So that’s the new way to bind to the host element, but what about host events using the @HostListener decorator? Well, this has changed too.

## Using Host Element Events to Replace the @HostListener Decorator

We have [an example](https://stackblitz.com/edit/stackblitz-starters-zhmaxq?file=src%2Fhost-listener.directive.ts) that, like the last demo, was created to [demonstrate different ways to listen to events in Angular](https://youtu.be/IBuZv_WmyrE). For one of the examples, I used the @HostListener decorator to listen for a click event on the host of a directive and emit the event using the @Output decorator and an EventEmitter.

<div>
<img src="https://briantree.se/assets/img/content/uploads/2024/07-05/demo-5.png" alt="Example of a @HostListener click event emitting a value with the @Output decorator" width="862" height="280" style="width: 100%; height: auto;">
</div>

So, if I simply click on the “submit” button, we will see a message that the button click occurred.
<div>
<img src="https://briantree.se/assets/img/content/uploads/2024/07-05/demo-6.gif" alt="Example of a @HostListener click event emitting a value with the @Output decorator" width="570" height="316" style="width: 100%; height: auto;">
</div>

So just like the @HostBinding decorator, we can remove the @HostListener because we don’t need it anymore. We’ll instead use the host property again. And this time, since we’re binding to an event, we’ll use parentheses. When the event fires, we’ll call our handleHostClick() function and we’ll pass it the click event.

#### host-listener.directive.ts
```typescript
@Directive({
  selector: '[appHostListener]',
  ...
  host: {
    '(click)': 'handleHostClick($event)'
  }
})
export class HostListenerDirective {
    @Output() buttonClick = new EventEmitter<PointerEvent>();

    private handleHostClick(event: PointerEvent) {
        event.preventDefault();
        this.buttonClick.emit(event);
    }
}
```

### How to Convert an Output Using the @Output Decorator to the New output() Function

Ok, now that we got rid of the @HostListener, we can also update this output to use the new [output()](https://angular.dev/guide/components/output-fn) function instead. We can then remove the @Output decorator and the EventEmitter too, since neither is needed with the new output() function. Then, we can replace them with the new output() function.

#### Before:
```typescript
@Output() buttonClick = new EventEmitter<PointerEvent>();
```

#### After:
```typescript
buttonClick = output<PointerEvent>();
```

And everything else remains the same, so we don’t need to change anything else. So, that’s about all we can update in this directive. If we save, we should see everything working the same as it did with the @HostListener decorator, @Output decorator, and EventEmitter, but it's now all updated to work in a modern Angular way.

## Conclusion

Ok, so that’s about it. Now you should have a solid understanding of how to bind to and listen to events on component and directive host elements without using the old decorators. I hope you found this tutorial helpful, and if you did, check out [my YouTube channel](https://www.youtube.com/@briantreese) for more tutorials about various topics and features within Angular.

## Want to See It in Action?

Check out the demo code and examples of these techniques in the Stackblitz examples below. If you have any questions or thoughts, don’t hesitate to leave a comment.

### After Replacing the @HostBinding Decorator
{% embed https://stackblitz.com/edit/stackblitz-starters-jfcdps?ctl=1&embed=1&file=src%2Fform%2Fform.component.ts %}

### After Replacing the @HostListener Decorator
{% embed https://stackblitz.com/edit/stackblitz-starters-anxtb5?ctl=1&embed=1&file=src%2Fhost-listener.directive.ts %}

---

## Found This Helpful?

If you found this article helpful and want to show some love, you can always [buy me a coffee!]( https://buymeacoffee.com/briantreese)
brianmtreese
1,913,145
Exploring Lesser-Known HTML Tags: Hidden Gems for Web Developers
As web developers, we often find ourselves relying on a set of familiar HTML tags to build our web...
0
2024-07-05T20:15:50
https://dev.to/hallowshaw/exploring-lesser-known-html-tags-hidden-gems-for-web-developers-26fm
html, webdev, development
**As web developers, we often find ourselves relying on a set of familiar HTML tags to build our web pages. However, HTML is a rich language with many tags that can enhance your web projects in unique and powerful ways. In this blog post, we’ll delve into some of these lesser-known HTML tags, showcasing their utility and providing examples of how to use them.**

- `<details>` and `<summary>`

The `<details>` tag creates a disclosure widget that users can open and close. Paired with the `<summary>` tag, it provides a heading that can be clicked to reveal or hide the content.

```
<details>
  <summary>More Information</summary>
  <p>This section contains additional information that is hidden by default.</p>
</details>
```

- `<dialog>`

The `<dialog>` tag is used to define a dialog box or window, making it easier to create modals and pop-ups without relying heavily on JavaScript.

```
<dialog id="myDialog">
  <p>This is a dialog box.</p>
  <button onclick="document.getElementById('myDialog').close()">Close</button>
</dialog>

<button onclick="document.getElementById('myDialog').showModal()">Open Dialog</button>
```

- `<meter>`

The `<meter>` tag represents a scalar measurement within a known range, such as disk usage or the relevance of a query result.

```
<label for="diskUsage">Disk usage:</label>
<meter id="diskUsage" value="0.6" min="0" max="1">60%</meter>
```

- `<progress>`

Similar to `<meter>`, this tag displays the completion progress of a task, such as a download or file upload (a bonus script at the end of this post shows how to update it from JavaScript).

```
<label for="file">Downloading file:</label>
<progress id="file" value="32" max="100">32%</progress>
```

- `<template>`

The `<template>` tag is used to declare fragments of HTML that can be cloned and inserted in the document by JavaScript, without being rendered when the page loads.

```
<template id="myTemplate">
  <div class="myClass">This is a template content.</div>
</template>

<script>
  const template = document.getElementById('myTemplate').content.cloneNode(true);
  document.body.appendChild(template);
</script>
```

- `<datalist>`

The `<datalist>` tag provides an autocomplete feature on input elements, offering a list of predefined options to the user.

```
<label for="browsers">Choose a browser:</label>
<input list="browsers" id="browser" name="browser">
<datalist id="browsers">
  <option value="Chrome">
  <option value="Firefox">
  <option value="Safari">
  <option value="Edge">
  <option value="Opera">
</datalist>
```

- `<output>`

The `<output>` tag represents the result of a calculation or user action.

```
<form oninput="result.value=parseInt(a.value)+parseInt(b.value)">
  <input type="range" id="a" value="50"> +
  <input type="number" id="b" value="50">
  <output name="result" for="a b">100</output>
</form>
```

- `<abbr>`

The `<abbr>` tag is used to define abbreviations or acronyms, providing an expanded description on hover.

```
<p>The <abbr title="World Health Organization">WHO</abbr> was founded in 1948.</p>
```

- `<time>`

The `<time>` tag represents a specific period in time, or a time on a 24-hour clock.

```
<p>The event will start at <time>14:00</time> on <time datetime="2024-07-10">July 10, 2024</time>.</p>
```

- `<fieldset>` and `<legend>`

The `<fieldset>` tag is used to group related elements within a form, and the `<legend>` tag provides a caption for the `<fieldset>`.

```
<form>
  <fieldset>
    <legend>Personal Information</legend>
    <label for="name">Name:</label>
    <input type="text" id="name" name="name">
    <label for="email">Email:</label>
    <input type="email" id="email" name="email">
  </fieldset>
</form>
```

- `<samp>`

The `<samp>` tag is used to define sample output from a computer program.
```
<p>The result of the computation is: <samp>42</samp>.</p>
```

- `<var>`

The `<var>` tag is used to define a variable in a mathematical expression or programming context.

```
<p>The equation is <var>x</var> = <var>y</var> + 2.</p>
```

- `<address>`

The `<address>` tag is used to define contact information for the author or owner of a document or article.

```
<address>
  Written by John Doe.<br>
  Visit us at:<br>
  Example.com<br>
  Box 564, Disneyland<br>
  USA
</address>
```
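
**Bonus: Updating `<progress>` from JavaScript**

As referenced in the `<progress>` section above, here is a short illustrative script for updating a progress bar dynamically. It reuses the `id="file"` element from that example; the simulated task and its timing are assumptions for demonstration only.

```javascript
// Assumes the page contains: <progress id="file" value="0" max="100">0%</progress>
const progressBar = document.getElementById('file');

// Simulate a long-running task (e.g., a chunked download) reporting progress.
let done = 0;
const timer = setInterval(() => {
  done += 10;
  progressBar.value = done; // the bar fills as the value increases
  if (done >= progressBar.max) {
    clearInterval(timer);
  }
}, 200);
```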
hallowshaw
1,913,144
Design Connect Four
Game design scenarios in OOAD interviews provide a well-rounded assessment of a candidate's...
0
2024-07-05T20:08:13
https://dev.to/muhammad_salem/design-connect-four-fdh
Game design scenarios in OOAD interviews provide a well-rounded assessment of a candidate's object-oriented programming skills, problem-solving abilities, and design thinking. They allow companies to identify well-rounded software engineers who can effectively translate conceptual ideas into functioning systems.

**Assessment of OO Principles:**

* **Classes and Objects:** Games naturally lend themselves to object-oriented representation. Candidates need to identify classes like `Player`, `Card`, `GameBoard`, etc., demonstrating their understanding of object creation and interaction.
* **Inheritance:** Games often have objects with similar attributes and behaviors. Inheritance allows for code reuse by creating base classes like `GamePiece` and subclasses like `ChessPiece` or `PuzzlePiece`. The interviewer can assess your ability to identify appropriate inheritance hierarchies.
* **Polymorphism:** Games might have objects that respond differently to the same action. For example, different chess pieces move differently. The interviewer can gauge your understanding of polymorphism through methods like `move()` that behave differently based on the subclass.
* **Encapsulation:** Games should encapsulate data and logic within objects. The interviewer can see if you design classes that hide internal implementation details and provide appropriate access methods.

**Problem-Solving and Design Skills:**

* **Breaking Down Complexity:** Games have rules and mechanics. The interview assesses your ability to analyze the problem, identify core components, and design a system that fulfills the game's requirements.
* **Algorithmic Thinking:** Games often involve calculations, decision-making, and managing game state. The interviewer can see if you can implement core game logic using appropriate algorithms and data structures.

**Background**

Connect Four is a popular game played on a 7x6 grid. Two players take turns dropping colored discs into the grid. The first player to get four discs in a row (vertically, horizontally or diagonally) wins.

**Requirements**

Some possible questions to ask:

- What are the rules of the game?
- What size is the grid?
- How many players are there? Player vs Computer? Player vs Player?
- Are we keeping track of the score?

Basics:

- The game will be played by only two players, player vs player
- The game board should be of variable dimensions
- The target is to connect N discs in a row (vertically, horizontally or diagonally)
- N is a variable (e.g. connect 4, 5, 6, etc.)
- There should be a score tracking system
- After a player reaches the target score, they are the winner

**Design**

High-level:

- We will need a Grid class to maintain the state of the 2-D board
- The board cell can be empty, yellow (occupied by Player 1) or red (occupied by Player 2)
- The grid will also be responsible for checking for a win condition
- We can have a Player class to represent the player's piece color. This isn't super important, but encapsulating information is generally a good practice
- The Game class will be composed of the Grid and Players
- The Game class will be responsible for the game loop and keeping track of the score

**Code**

```java
import java.util.*;

enum GridPosition {
    EMPTY,
    YELLOW,
    RED
}

class Grid {
    private int rows;
    private int columns;
    private int[][] grid;

    public Grid(int rows, int columns) {
        this.rows = rows;
        this.columns = columns;
        initGrid();
    }

    public void initGrid() {
        this.grid = new int[rows][columns];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < columns; j++) {
                grid[i][j] = GridPosition.EMPTY.ordinal();
            }
        }
    }

    public int[][] getGrid() {
        return this.grid;
    }

    public int getColumnCount() {
        return this.columns;
    }

    public int placePiece(int column, GridPosition piece) {
        if (column < 0 || column >= this.columns) {
            throw new Error("Invalid column");
        }
        if (piece == GridPosition.EMPTY) {
            throw new Error("Invalid piece");
        }
        // Place piece in the lowest empty row
        for (int row = this.rows - 1; row >= 0; row--) {
            if (this.grid[row][column] == GridPosition.EMPTY.ordinal()) {
                this.grid[row][column] = piece.ordinal();
                return row;
            }
        }
        return -1;
    }

    public boolean checkWin(int connectN, int row, int col, GridPosition piece) {
        // Check horizontal
        int count = 0;
        for (int c = 0; c < this.columns; c++) {
            if (this.grid[row][c] == piece.ordinal()) {
                count++;
            } else {
                count = 0;
            }
            if (count == connectN) {
                return true;
            }
        }

        // Check vertical
        count = 0;
        for (int r = 0; r < this.rows; r++) {
            if (this.grid[r][col] == piece.ordinal()) {
                count++;
            } else {
                count = 0;
            }
            if (count == connectN) {
                return true;
            }
        }

        // Check diagonal
        count = 0;
        for (int r = 0; r < this.rows; r++) {
            int c = row + col - r; // row + col = r + c, for a diagonal
            if (c >= 0 && c < this.columns && this.grid[r][c] == piece.ordinal()) {
                count++;
            } else {
                count = 0;
            }
            if (count == connectN) {
                return true;
            }
        }

        // Check anti-diagonal
        count = 0;
        for (int r = 0; r < this.rows; r++) {
            int c = col - row + r; // row - col = r - c, for an anti-diagonal
            if (c >= 0 && c < this.columns && this.grid[r][c] == piece.ordinal()) {
                count++;
            } else {
                count = 0;
            }
            if (count == connectN) {
                return true;
            }
        }

        return false;
    }
}

class Player {
    private String name;
    private GridPosition piece;

    public Player(String name, GridPosition piece) {
        this.name = name;
        this.piece = piece;
    }

    public String getName() {
        return this.name;
    }

    public GridPosition getPieceColor() {
        return this.piece;
    }
}

class Game {
    static Scanner input = new Scanner(System.in);
    private Grid grid;
    private int connectN;
    private Player[] players;
    private Map<String, Integer> score;
    private int targetScore;

    public Game(Grid grid, int connectN, int targetScore) {
        this.grid = grid;
        this.connectN = connectN;
        this.targetScore = targetScore;
        this.players = new Player[] {
            new Player("Player 1", GridPosition.YELLOW),
            new Player("Player 2", GridPosition.RED)
        };
        this.score = new HashMap<>();
        for (Player player : this.players) {
            this.score.put(player.getName(), 0);
        }
    }

    private void printBoard() {
        System.out.println("Board:");
        int[][] grid = this.grid.getGrid();
        for (int i = 0; i < grid.length; i++) {
            String row = "";
            for (int piece : grid[i]) {
                if (piece == GridPosition.EMPTY.ordinal()) {
                    row += "0 ";
                } else if (piece == GridPosition.YELLOW.ordinal()) {
                    row += "Y ";
                } else if (piece == GridPosition.RED.ordinal()) {
                    row += "R ";
                }
            }
            System.out.println(row);
        }
        System.out.println();
    }

    private int[] playMove(Player player) {
        printBoard();
        System.out.println(player.getName() + "'s turn");
        int colCnt = this.grid.getColumnCount();
        System.out.print("Enter column between 0 and " + (colCnt - 1) + " to add piece: ");
        int moveColumn = input.nextInt();
        int moveRow = this.grid.placePiece(moveColumn, player.getPieceColor());
        return new int[] { moveRow, moveColumn };
    }

    private Player playRound() {
        while (true) {
            for (Player player : this.players) {
                int[] pos = playMove(player);
                int row = pos[0];
                int col = pos[1];
                GridPosition pieceColor = player.getPieceColor();
                if (this.grid.checkWin(this.connectN, row, col, pieceColor)) {
                    this.score.put(player.getName(), this.score.get(player.getName()) + 1);
                    return player;
                }
            }
        }
    }

    public void play() {
        int maxScore = 0;
        Player winner = null;
        while (maxScore < this.targetScore) {
            winner = playRound();
            System.out.println(winner.getName() + " won the round");
            maxScore = Math.max(this.score.get(winner.getName()), maxScore);
            this.grid.initGrid(); // reset grid
        }
        System.out.println(winner.getName() + " won the game");
    }
}

class Main {
    public static void main(String[] args) {
        Grid grid = new Grid(6, 7);
        Game game = new Game(grid, 4, 10);
        game.play();
    }
}
```
muhammad_salem
1,913,126
Newbie Rust Programmer Project Practice
I am currently an active beginner Rust programmer who has just started learning. During my learning...
0
2024-07-05T19:18:42
https://dev.to/auula_/newbie-rust-programmer-project-practice-5e2h
I am an active beginner Rust programmer who has just started learning. During my learning process, I have taken to Rust readily, and I really appreciate its memory management design and its unique language features. As beginners in Rust, we all need some programming exercises to help us get into the world of Rust programming.

I have been learning Rust for about a week now, and I tried to mimic the mdbook program in Rust, developing a similar tool. Through this project, I practice some Rust programming skills. The source code is now open source on GitHub. Are there any other beginners in Rust? We can maintain or learn from this project together. 😄

GitHub: https://github.com/auula/typikon

As a newbie Rust programmer, I hope my project can get some attention. 😄 If you like it, please give it a star 🌟.

1. Introduction

Typikon's name is derived from the Typikon book. It is a static website rendering tool similar to mdbook and GitBook, but it focuses only on rendering Markdown into an online book, and it is easier to use than the other tools.

2. Preview

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fj0u6fvgmdmejf0nw3hz.png)

3. Document

To learn how to use the Typikon program, you can refer to the documentation that I have written. The documentation itself is rendered and built using Typikon. The online documentation can be accessed at the following address: https://typikonbook.github.io 🌟
auula_
1,913,143
Chatbox Beginner
Hey I’m new to python and I want to start creating chatboxes for small businesses to implement on...
0
2024-07-05T20:07:09
https://dev.to/jeffrey_39f861255ebf7c628/chat-box-beginner-3kc7
python, chatbox, coding, programming
Hey, I’m new to Python and I want to start creating chatboxes for small businesses to implement on their websites. Where do I start? I want to customize them for their needs. Any suggestions? Thanks! I have a MacBook.
jeffrey_39f861255ebf7c628
1,913,141
Enhancing Your Web Development with Sentry: A Comprehensive Guide
As a web developer, ensuring a smooth and seamless user experience is paramount. Whether you’re...
0
2024-07-05T20:01:32
https://dev.to/syedahmedullah14/enhancing-your-web-development-with-sentry-a-comprehensive-guide-pi
webdev, javascript, programming, opensource
As a web developer, ensuring a smooth and seamless user experience is paramount. Whether you’re building a modern, minimalist portfolio website or a complex web application, monitoring performance and tracking errors are crucial tasks. Recently, I had the opportunity to integrate Sentry into a client project, and it has been a game-changer. In this blog post, I’ll dive deep into why you should consider using Sentry (or a similar tool) in your projects, explore its features in detail, and provide a step-by-step guide to getting started.

## Why Use Sentry (or Any Error Tracking Tool)?

Error tracking and performance monitoring tools like Sentry are indispensable for several reasons:

### Real-time Error Tracking:

They capture and aggregate errors as they occur, providing detailed stack traces and context. This helps you quickly identify and resolve issues, enhancing the overall user experience.

### Performance Monitoring:

They track performance metrics such as slow database queries and long page load times, pinpointing bottlenecks and areas for optimization.

### User Feedback:

They collect feedback from users experiencing issues, offering direct insights into user pain points and improving troubleshooting.

### Alerts and Notifications:

They send alerts via email, Slack, or other channels when issues arise, enabling rapid responses and minimizing downtime.

### Integration:

They seamlessly integrate with various frameworks and tools, making them versatile and easy to incorporate into existing workflows.

## Features of Sentry

### 1. Real-time Error Tracking

Sentry captures and aggregates errors in real-time, providing detailed stack traces and context to help identify the root cause of issues. This feature is crucial for maintaining application stability and ensuring that errors are addressed promptly.

### 2. Performance Monitoring

Sentry tracks performance issues, such as slow database queries or long page load times. It provides insights into bottlenecks and areas for optimization, helping you improve the overall performance of your application.

### 3. User Feedback

Sentry’s user feedback feature collects feedback from users experiencing issues. This direct insight from users is invaluable for understanding and addressing their pain points, improving the overall user experience.

### 4. Alerts and Notifications

Sentry sends alerts via email, Slack, or other channels when issues arise. These alerts ensure that you can respond to problems quickly, minimizing downtime and maintaining application reliability.

### 5. Integration

Sentry seamlessly integrates with various programming languages and frameworks, including JavaScript, Python, Ruby, Node.js, and more. This makes it a versatile tool that can be easily incorporated into different development environments.

![Sentry popup](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6svm85epuwipreb3fpzc.PNG)

## Getting Started with Sentry

Integrating Sentry into your project is straightforward. Here’s a step-by-step guide to get you started.

### Step 1: Sign Up for Sentry

If you don’t already have a Sentry account, sign up at sentry.io. You can choose a plan that suits your needs, including a free tier for smaller projects.

### Step 2: Install Sentry

For this guide, we’ll focus on integrating Sentry with a React.js project. Start by installing the Sentry SDK for JavaScript.

`npm install @sentry/react @sentry/tracing`

### Step 3: Initialize Sentry

In your project’s entry file (e.g., index.js), initialize Sentry with your DSN (Data Source Name), which you can find in your Sentry project settings.

```
import * as Sentry from "@sentry/react";
import { Integrations } from "@sentry/tracing";

Sentry.init({
  dsn: "YOUR_SENTRY_DSN",
  integrations: [
    new Integrations.BrowserTracing(),
  ],
  tracesSampleRate: 1.0, // Adjust this value in production
});
```

### Step 4: Capture Errors

You can manually capture errors in your application using Sentry’s captureException method (a bonus example at the end of this post shows how to attach extra context when doing so).

```
try {
  // Your code here
} catch (error) {
  Sentry.captureException(error);
}
```

### Step 5: Monitor Performance

To monitor performance, wrap your routes with Sentry.withProfiler and use the useEffect hook to measure performance in your components.

```
import { withProfiler } from "@sentry/react";
import { BrowserRouter as Router, Route } from "react-router-dom";

const App = () => (
  <Router>
    <Route path="/" component={withProfiler(HomePage)} />
    {/* Other routes */}
  </Router>
);

export default App;
```

## Alternatives to Sentry

While Sentry is a powerful tool, there are alternatives that you might consider based on your specific needs:

### LogRocket:

Focuses on session replay and error tracking, providing insights into user interactions and issues.

### New Relic:

Offers a comprehensive suite of monitoring tools, including error tracking, performance monitoring, and infrastructure monitoring.

### Raygun:

Provides error, crash, and performance monitoring with detailed diagnostics and user tracking.

## Conclusion

Integrating Sentry into your web development projects can significantly enhance your ability to monitor performance, track errors, and improve the user experience. Its robust features, including real-time error tracking, performance monitoring, user feedback, alerts, and seamless integration, make it an invaluable tool for developers. By following the steps outlined in this guide, you can get started with Sentry and take your development projects to the next level.

Special thanks to JavaScript Mastery for introducing me to this incredible tool.

### I would also like to mention the intelligent team behind this amazing tool

@whitep4nth3r @nikolovlazar @drguthals @rahulchhabria @matt_henderson

If you haven’t tried Sentry yet, I highly recommend giving it a go!

Happy coding! 🚀
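
## Bonus: Capturing Errors with Extra Context

As a small addendum to Step 4 (not part of the original guide): when you capture a handled error, it often helps to attach tags and extra data so the event is easier to search and triage in Sentry. The snippet below is a minimal sketch using Sentry's `withScope` API; the tag and extra values shown are hypothetical examples.

```javascript
import * as Sentry from "@sentry/react";

try {
  // Your code here
} catch (error) {
  Sentry.withScope((scope) => {
    scope.setTag("feature", "checkout"); // hypothetical tag for filtering
    scope.setExtra("cartSize", 3);       // hypothetical debugging data
    Sentry.captureException(error);      // captured with the scope's context
  });
}
```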
syedahmedullah14
1,913,132
I think the web is broken, or i am
Huhhm... (a sigh of lost hope) The world of a quiche eater isn't openly documented, well here is...
0
2024-07-05T19:59:35
https://dev.to/oarabiledev/im-definitely-a-certified-quiche-eater-a8p
webdev, javascript, learning, frontend
Huhhm... (a sigh of lost hope)

The world of a quiche eater isn't openly documented, so here is everything looming in my mind, at least about how I think UI dev should be.

Here is the backstory: I started my development journey by building Android apps using DroidScript (a JavaScript framework); the issue is it's not cross-platform.

_NOTE: I refer to the native way of building UI._

The native way looks something like this:

```javascript
function OnStart() {
  let main = app.CreateLayout('linear', 'fillxy')
  let btn = app.AddButton(main, 'A simple Button', 0.8, -1)
  app.AddLayout(main)
}
```

I've gotten so used to building UI this way that it seems correct; I feel at peace, I feel at home. I have built direct copies of popular design schemes for DroidScript using the native Android UI APIs provided, like [Material Design 3](https://github.com/oarabiledev/material-design).

I find myself having to compromise. I hate the thought of typing so much HTML and CSS to get something I feel proud of; the switching between the two is annoying.

_Well, you could say: why not learn a framework like React or Vue._

They all have that XML syntax and I don't like it. While I love web components, they make me hate the XML syntax so much more... now I have to add a f*** dash.

I have tried a multitude of frameworks:

- I did React, and I love the way I can call functions and build UI; I just don't feel well with React Query and how much I have to learn, and things don't come pre-built. I hate thinking about which state management solution to use.
- I did bits of Vue. To be honest, I did like it, but I feel so far away from the development I like. Keep in mind I hate XML syntax, so as Vue is catching strays, Svelte isn't safe either.
- I did Solid online, though; yeah, I think I'm fine.
- Before this, I did install Next.js. I might give it a try, looks fire, but I may have to swallow the hard pill this time.

_Okay, but what have I done, since I dread XML syntax?_

I made my own framework. I called it SquidBASE.js, and I didn't go through with my promises, deleted it, and gave up.

The hate boiled up again, so I tried a different approach: I built innerscope.js, and it flopped; I just couldn't get it to align with what I want. I previously even made an article about innerscope.js bashing how frameworks are getting it all wrong; I won't delete it though >3.

Now I am on my third attempt: viewml.js

The motivation that I will succeed is low; the development of it is a major blow to my ego. I hate having to ask that dreaded LLM for help. I'm just copying; I want to be that 10X developer, but everything feels so hard.

Anyway, my plan is that I'm not going to copy exactly how DroidScript operates, but I have figured out why it felt so good: it's the separation of concerns. It lets you write UI, and good UI at that, without involving CSS or HTML.

Now look, I'm not that bad of a quiche eater. CSS is great, so I decided to call everything from the JS side and embed element-focused styles onto that element in a CSS-in-JS way.

Here is the hierarchy of the project:

```javascript
/* As a class glazer, the Application class is the top level,
   and we have an onStart function to be the start */
class Application {
  onStart() {
    let main = vml.addLayout("main", "linear", "vertical");
    let bannerdiv = vml.addHtmlEl(main, "div", 'center,vertical');

    /* With any div or element, its children can be aligned
     * a certain way; that's why we got the center, vertical options.
     * Now you don't have to add the CSS for that and call it from
     * a function */

    let banner = vml.addHtmlEl(bannerdiv, "h1");
    banner.textContent = "The framework for staying in the flow";
    banner.id = 'banner'

    banner.css`
      letter-spacing: 0.5px;
      margin-top: 15px;
      font-family: "Inter", sans-serif;
      font-weight: 700;
      font-size: 48px;
      color: #213547;
      text-align: center;
      overflow-wrap: break-word;
    `;
  }
}
```

I can't demonstrate this well in a blog post, but I'd advise you to check the App.js code on GitHub: [viewml App.js File](https://github.com/oarabiledev/viewml/blob/main/App.js)

Now all of this takes me in all different places. I feel like I have got everything mixed up. Sometimes I think maybe it's not for me, or maybe I'm the issue; the world loves that XML syntax, and I haven't found someone who shares these thoughts.

Also, with job concerns, this even takes me on a scare loop: will I find a job, or will this be a hobby that cannot elevate me because I don't want to fit in the world that exists and the way it works?

Or am I overthinking? Advise me.
oarabiledev
1,913,140
What is Serverless? An example based explanation
tl;dr (short version) Serverless is a cloud technology that allows you to focus more on...
0
2024-07-05T19:59:25
https://dev.to/redrobotdev/what-is-serverless-an-example-based-explanation-5d80
aws, cdk, serverless, webdev
## tl;dr (short version)

Serverless is a cloud technology that allows you to focus more on your software application logic, rather than on the deployment or maintenance of servers hosting your application. This leaves more time to improve your application, test it thoroughly, and add new functions.

The alternative to serverless is server-oriented deployment, which has its pros and cons. In general, if you have an application that needs to be accessed by the public via the internet, serverless is a good option, especially if you are in the beginning phase of application development.

## Deep Dive

Take the following TypeScript/JavaScript code as an example:

```typescript
import {
  fetchCustomerActiveProject,
  fetchInvoice,
  generateInvoicePDF,
} from "../api"

export function generateCustomerInvoice(customerId: string) {
  const activeProject = fetchCustomerActiveProject(customerId)
  const invoice = fetchInvoice(activeProject.id)
  const pdfGenResult = generateInvoicePDF(invoice)

  if (!pdfGenResult.success) {
    throw new Error(pdfGenResult.error)
  }

  return pdfGenResult.filepath
}
```

Nothing fancy here: the function accepts a customer’s ID and then generates a PDF of the invoice. We are not interested in the low-level details; assume the function works correctly.

So we want to call this function from our web, mobile, or desktop application — i.e., by clicking on a download PDF button and triggering this call.

Take this UI as an example — the user clicks on the download file icon next to the invoice and the function would be called (or triggered).

![UI Sample](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sf01sxxxs9plwagfck4s.png)

## How can we achieve this?

The usual way, or the “non-serverless way”, to do this is by running something like an Express Node.js app (maybe proxied via an Nginx server) that has a route like so:

/api/generateInvoice?customerId=75612312

And the code would be something like this:

```typescript
const express = require("express")
const app = express()
const port = 3000

const { generateCustomerInvoice } = require("api")

// Middleware to parse JSON bodies
app.use(express.json())

// Route to generate invoice
app.get("/api/generateInvoice", (req, res) => {
  const customerId = req.query.customerId

  if (!customerId) {
    return res.status(400).json({ error: "Customer ID is required" })
  }

  const invoice = generateCustomerInvoice(customerId)
  res.json(invoice)
})

// Start the server
app.listen(port, () => {
  console.log(`Server running at http://localhost:${port}`)
})
```

And to run this in a production environment, you would have to manually provision a computer or a virtual machine (it can be Docker as well) and host this there.

![Server Oriented Diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vkzve9qg0vaudfdyv49a.png)

So the steps involved in deploying this would be the following:

1. Create a virtual machine (specify RAM, storage, CPU cores and speed)
2. Install Node.js
3. Install Express
4. Transfer the code to that machine
5. Run it
6. Monitor it

In theory this sounds easy enough, but in the real world — not so much.

## What are the issues with this solution?

Here it comes — once you do all that (after deploying your code), you have the following responsibilities:

1. Manage updating the operating system to make sure no vulnerabilities exist. This means keeping up to date with issues, patching your kernel, etc.
2. Make sure you set up the firewall correctly so it’s secure and no one can access the machine. For this case, the firewall configuration is going to be easy, but for larger and more complex applications, managing IPTables requires a lot of reading.
3. In case of traffic spikes, increase the machine resources or create a secondary one and use a load balancer. But after the spikes, you have to undo that and go back to a single server.
4. Leave the machine running at all times, even if the invoice creation function would only be called once in a while — so you’re wasting CPU resources, electricity, and money.
5. In case something goes wrong — e.g., the machine runs out of memory and crashes — you have to monitor the machine, restart it, and recover it.

All of this for just this simple function — imagine adding a database to this system, or a caching system, etc.; now you have to monitor even more. This is precious time you have to spend supporting your function, where instead you could be updating your app or improving it.

This is where serverless comes in.

## Serverless

Serverless solves all of the mentioned problems by giving you the tools to simply upload your function to the serverless provider of your choice, specify the API route that would call this function, and that’s it.

![Serverless Oriented](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ismbun56mly44gf5cz5d.png)

> Technically speaking, “Serverless” is a misnomer, because you are very much still using a server, BUT you are not manually provisioning and managing the server. If you really want to name it properly, it should’ve been called something like “3rd Party Managed Servers” instead, but I think “Serverless” is a more marketable term.

Serverless will manage updating any underlying OS it’s running on and the firewall; it will scale up when traffic spikes, scale down when there is no traffic, and handle all server-related issues.

... it’s a perfect solution for startups and proof-of-concept projects, as it reduces the time needed to fix and manage the supporting servers — so you have more time to work on application logic ...

So to define serverless formally: it is a cloud-native computing model (meaning it is a feature that is only available on 3rd-party cloud services like AWS, Cloudflare, fly.io, etc.) which allows you to build and run applications and services without having to manage infrastructure.

## Why not use Serverless?

As mentioned in the last sentence, serverless is a cloud-native solution, so if you plan on hosting your application on a private intranet — let’s say, a company that has local servers like a military-based company, or somewhere with slow or no internet connection — then serverless is not the right solution for you.

Second, the pricing model is not deterministic — if you are not careful with your lambda configuration and, let’s say, a DDoS attack happens, it will incur a [large amount of charges](https://cybernews.com/news/ddos-attack-104k-bill-from-hosting-provider/). Or if you get unexpectedly high traffic without any monetization strategy, the same issue will occur.

Thirdly, my experience with serverless development has been slow — the development process is slow since you have to push your changes to the provider to test out the functions, whereas if you develop locally, like in a Docker environment, it would be much faster. To speed up your dev process, you actually have to become a better defensive coder.

## Sample Pseudo Code

This example demonstrates using AWS, AWS CDK, and TypeScript to create a sample codebase that automatically:

1. Provisions serverless resources on AWS
2. Uploads the Lambda function code
3. Creates an API route to invoke the function

This is the main entry point of the code:

```typescript
const config = getConfig()

// cdk starting point
new SampleAppStack(app, "RedRobot", {
  env: {
    account: config.awsAccountId,
    region: config.awsRegion,
  },
})
```

This is the SampleAppStack class, which creates the Lambda and REST resources (API Gateway):

```typescript
import * as cdk from "aws-cdk-lib"
import { Construct } from "constructs"
import * as lambdaNodeJs from "aws-cdk-lib/aws-lambda-nodejs"
import * as lambda from "aws-cdk-lib/aws-lambda"
import * as apigateway from "aws-cdk-lib/aws-apigateway"

export class SampleAppStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props)

    // create lambda (scoped to this stack)
    const lambdaPdfGen = new lambdaNodeJs.NodejsFunction(
      this,
      "pdfGenerator",
      {
        entry: "../../lambda/pdfGenerator",
        handler: "index",
        runtime: lambda.Runtime.NODEJS_20_X,
      }
    )

    // top level api gateway construct
    // (each path segment is its own resource, so /api/generatePDF is two resources)
    const restApi = new apigateway.RestApi(this, "restApi", {})
    const apiResource = restApi.root.addResource("api").addResource("generatePDF")

    // attach the lambda PDF generator call to the rest api get call
    apiResource.addMethod("GET", new apigateway.LambdaIntegration(lambdaPdfGen))
  }
}
```

And this is the lambda function:

```typescript
// lambda/pdfGenerator
import * as lambda from "aws-lambda";
import {
  fetchCustomerActiveProject,
  fetchInvoice,
  generateInvoicePDF,
  parseCustomerId
} from '../api'

export const generateCustomerInvoiceLambda: lambda.APIGatewayProxyHandler =
  async function (event: lambda.APIGatewayProxyEvent, context: lambda.Context) {
    try {
      const { customerId } = parseCustomerId(event.body);
      const activeProject = fetchCustomerActiveProject(customerId);
      const invoice = fetchInvoice(activeProject.id);
      const pdfGenResult = generateInvoicePDF(invoice);

      if (!pdfGenResult.success) {
        throw new Error(pdfGenResult.error);
      }

      return {
        ...
        body: { filepath: pdfGenResult.filepath },
      };
    } catch (e: any) {
      ...
    }
  };
```

This code is all you need to get started, eliminating the need for the manual resource creation mentioned in the server section.

An important point to note: we used AWS's Infrastructure as Code (IaC) framework, CDK, to write the infrastructure code. We could have done the same for the server-oriented code mentioned earlier, but it would have been more complex, involving writing a Dockerfile, using a service like Fargate to upload and host the Docker image, etc.

The IaC code shown here isn't strictly a component of serverless architecture. However, since serverless involves heavily utilizing AWS services, not using an IaC framework would make working with serverless very difficult. If you're interested in learning more about IaC, refer to the end of the document.

## Should I care about this as a Business owner?

As a business owner, it's crucial to understand these technologies, particularly if your web application is central to your operations. It's equally important to recognize when serverless isn't the best choice. Depending on your specific requirements, serverless might be costlier compared to alternatives like a Kubernetes deployment solution.

Consider a scenario where your business develops software for parsing and managing legal documents. In this case, you might opt for a container-based infrastructure instead of serverless due to the critical nature of the data being parsed and stored. This approach preserves the flexibility to deploy your code on-premises, which is especially valuable when dealing with sensitive data that cannot be hosted on public cloud infrastructure.

## Do you want to learn Serverless?

I‘ve developed a comprehensive Udemy course that guides you through building a Serverless Single Page Application (SPA) from the ground up using AWS, AWS CDK, AWS SDK, Next.js and TypeScript, explaining key concepts as you progress through the class. Prerequisites are limited to basic JavaScript and React knowledge.

![serverless fullstack fundamentals with aws/cdk/nextjs/typescript](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/befgccquustt6qnn4t0w.png)

[Udemy Class Link](https://www.udemy.com/course/serverless-fullstack-fundamentals-with-aws-cdk-nextjs-typescript/?referralCode=C8DAB06460466F29B74A).

Check out https://redrobot.dev/ if you need help with a project, consultation or training.
redrobotdev
1,912,686
Building Real-Time Apps with Next.js and WebSockets
Introduction In today's fast-paced digital world, real-time data exchange is essential for...
0
2024-07-05T19:18:06
https://dev.to/danmusembi/building-real-time-apps-with-nextjs-and-websockets-2p39
webdev, javascript, tutorial, nextjs
## Introduction

In today's fast-paced digital world, real-time data exchange is essential for developing dynamic and interactive web applications. WebSockets are a powerful method for enabling real-time, bidirectional communication between clients and servers. In this blog post, we'll look at how to use WebSockets with Next.js to create real-time apps.

A WebSocket is a communication protocol that provides full-duplex communication channels over a single TCP connection. It enables a real-time, event-driven connection between a client and a server. Unlike traditional HTTP, which follows a request-response model, WebSockets allow two-way (bi-directional) communication. This means that the client and the server can send data to each other at any time without continuous polling. To learn more about WebSockets, check [here](https://www.geeksforgeeks.org/what-is-web-socket-and-how-it-is-different-from-the-http/).

Without further ado, let's get started developing our real-time communication application. We'll start by bootstrapping a new Next.js project. Open your preferred code editor and run the following command in the terminal to create a new project.

```
npx create-next-app@latest real-time-app
```

Make sure to set up the project as shown below.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lqyoju23l60hvbvzjv7n.PNG)

After it has finished, run the following commands to open the project folder in the code editor.

```
cd real-time-app
code .
```

## Installing dependencies

Run the following command to install `ws`, a WebSocket library for Node.js.

```
npm install ws
```

## Creating a WebSocket server

Let's start by creating a `server.js` file in the root directory and inserting the following code into it.

```
const { createServer } = require('http');
const { parse } = require('url');
const next = require('next');
const WebSocket = require('ws');

const dev = process.env.NODE_ENV !== 'production';
const app = next({ dev });
const handle = app.getRequestHandler();

app.prepare().then(() => {
  const server = createServer((req, res) => {
    const parsedUrl = parse(req.url, true);
    handle(req, res, parsedUrl);
  });

  const wss = new WebSocket.Server({ server });

  wss.on('connection', (ws) => {
    console.log('New client connected');

    ws.on('message', (message) => {
      console.log(`Received message: ${message}`);
      ws.send(`Server: ${message}`);
    });

    ws.on('close', () => {
      console.log('Client disconnected');
    });
  });

  server.listen(3000, (err) => {
    if (err) throw err;
    console.log('> Ready on http://localhost:3000');
  });
});
```

In this code, we set up a server. It first imports the necessary modules and sets up a Next.js app in development mode if NODE_ENV is not set to 'production'. The app prepares the server to handle HTTP requests with Next.js's request handler. It also creates a WebSocket server (wss) attached to the HTTP server. When a WebSocket client connects, the server logs the connection. It listens for messages from the client, logs them, and sends back a response. It also logs when the client disconnects. The server listens on port 3000.

Next, modify the scripts section in the package.json file to use the custom server we've just created.

```
"scripts": {
  "dev": "node server.js",
  "build": "next build",
  "start": "NODE_ENV=production node server.js"
},
```

## Integrating WebSocket client in Next.js

In this part, we will create a WebSocket client in our Next.js application. Begin by creating a hooks folder in the app directory. Then, create a file called `useWebSocket.js`.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c9o8s2kpbdki3r2xv9i4.PNG)

Insert the following code in that file.

```
import { useEffect, useState } from 'react';

const useWebSocket = (url) => {
  const [messages, setMessages] = useState([]);
  const [ws, setWs] = useState(null);

  useEffect(() => {
    const socket = new WebSocket(url);
    setWs(socket);

    socket.onmessage = (event) => {
      setMessages((prevMessages) => [...prevMessages, event.data]);
    };

    return () => {
      socket.close();
    };
  }, [url]);

  const sendMessage = (message) => {
    if (ws) {
      ws.send(message);
    }
  };

  return { messages, sendMessage };
};

export default useWebSocket;
```

In this code we define a custom React hook `useWebSocket` for managing WebSocket connections. It establishes a connection to a WebSocket server using the provided URL, stores incoming messages in a state array, and provides a function to send messages through the WebSocket.

- `useEffect`: Opens a WebSocket connection when the component mounts and closes it when the component unmounts.
- `onmessage`: Updates the messages state with incoming messages.
- `sendMessage`: Sends a message through the WebSocket if it's connected.

The hook returns the messages array and the sendMessage function for use in components. (For a variant of this hook that automatically reconnects, see the sketch at the end of this post.)

## Using the WebSocket in a component

To use the WebSocket in our component, update the `page.js` file with the following code.

```
'use client'

import { useState } from 'react';
import useWebSocket from '@/app/hooks/useWebSocket';

const Home = () => {
  const { messages, sendMessage } = useWebSocket('ws://localhost:3000');
  const [input, setInput] = useState('');

  const handleSubmit = (e) => {
    e.preventDefault();
    sendMessage(input);
    setInput('');
  };

  return (
    <div>
      <h1>Real-time Chat</h1>
      <form onSubmit={handleSubmit}>
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
        />
        <button type="submit">Send</button>
      </form>
      <div>
        {messages.map((msg, index) => (
          <div key={index}>{msg}</div>
        ))}
      </div>
    </div>
  );
};

export default Home;
```

This is a React component for our real-time chat functionality using WebSockets. The component leverages React's `useState` hook to manage the state of the input field, allowing users to type and send messages.

**Key Features:**

- WebSocket Integration: Utilizes the custom useWebSocket hook to establish a WebSocket connection to ws://localhost:3000, where our custom server is listening.
- State Management: Uses useState to handle the current message input.
- Message Handling: The handleSubmit function sends the typed message via WebSocket and then clears the input field.
- Real-time Updates: Displays received messages in real-time by mapping over the messages array.

**How It Works:**

- Connection: On component mount, useWebSocket initiates a WebSocket connection.
- Sending Messages: When the form is submitted, sendMessage sends the message, and the input field is reset.
- Displaying Messages: Received messages are displayed dynamically as they arrive.

## Running the application

Run `npm run dev` in the terminal to start the application (this uses the dev script we defined above). Open your browser and navigate to http://localhost:3000. You should see a simple chat interface where you can send messages and receive real-time updates.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dpkvsc202kgok1wdzexa.PNG)

## Conclusion

Finally, developing real-time applications with Next.js and WebSockets is an effective way to improve user experiences through instant communication and updates. This post demonstrates how to set up a WebSocket server and a Next.js application to create a smooth real-time chat interface. Using these technologies, developers may incorporate dynamic, interactive features into their online applications, paving the way for more engaging and responsive user engagements. This method not only increases functionality but also lays the groundwork for future real-time application development.
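
## Bonus: A Reconnecting Variant of the Hook

As promised above, here is a small addendum (not part of the original tutorial): the `useWebSocket` hook closes for good if the server restarts or the connection drops. The sketch below retries the connection after a short delay while the component is still mounted; the hook name and the retry delay are my own assumptions.

```javascript
import { useEffect, useRef, useState } from 'react';

// Hypothetical extension of the useWebSocket hook from this post:
// reconnect after a short delay if the socket closes unexpectedly.
const useReconnectingWebSocket = (url, retryMs = 2000) => {
  const [messages, setMessages] = useState([]);
  const wsRef = useRef(null);

  useEffect(() => {
    let cancelled = false;

    const connect = () => {
      const socket = new WebSocket(url);
      wsRef.current = socket;

      socket.onmessage = (event) => {
        setMessages((prev) => [...prev, event.data]);
      };

      // Retry unless the component has unmounted.
      socket.onclose = () => {
        if (!cancelled) {
          setTimeout(connect, retryMs);
        }
      };
    };

    connect();

    return () => {
      cancelled = true;
      wsRef.current.close();
    };
  }, [url, retryMs]);

  const sendMessage = (message) => {
    // Only send when the connection is actually open.
    if (wsRef.current && wsRef.current.readyState === WebSocket.OPEN) {
      wsRef.current.send(message);
    }
  };

  return { messages, sendMessage };
};

export default useReconnectingWebSocket;
```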
danmusembi
1,913,138
Are you facing hard times as a developer?
Are you having difficulty finding a community of developers and mentors in your coding journey? are...
0
2024-07-05T19:46:55
https://dev.to/evansifyke/are-you-facing-hard-times-as-a-developer-fm9
webdev, help, beginners, programming
Are you having difficulty finding a community of developers and mentors in your coding journey? Are you having a hard time fixing your code on your own? At https://progskill.com/projects, we've got your back.

Join our platform, submit your project, and get immediate feedback from our dedicated team of experts and developers. Here you will get dedicated mentorship and the help you need in your software development journey.

https://progskill.com/projects
evansifyke
1,913,137
[+1(844) 640-0543]How To Speak At Someone On Eva Air?
How To Speak At Someone On Eva Air? To speak with someone at EVA Air, you can contact their customer...
0
2024-07-05T19:46:17
https://dev.to/seo_marketing_282c2247794/1844-640-0543how-to-speak-at-someone-on-eva-air-5261
**How To Speak At Someone On Eva Air?**

To speak with someone at EVA Air, you can contact their customer service department. For immediate assistance, call their toll-free number at +1(844) 640-0543. They are available to help with any inquiries or concerns you may have regarding your travel experience.

**How To Speak At Someone On Eva Air Airline?**

If you need to reach a representative at EVA Air Airline, you can do so by dialing their toll-free number at +1(844) 640-0543. This will connect you with their customer service team, who can assist with various questions or issues related to your flight.

**How Do I Escalate An Issue With Eva Air?**

To escalate an issue with EVA Air, contact their customer service for further support. The best way to get in touch is by calling their toll-free number at +1(844) 640-0543. They will help you escalate your concern to the appropriate department for resolution.

**How To Speak Directly At Eva Air?**

For direct communication with EVA Air, call their toll-free number at +1(844) 640-0543. This will connect you with a customer service representative who can assist you with any questions or issues you may have about your flight or booking.

**How do I speak to a Eva Air representative fast?**

To speak to an EVA Air representative quickly, dial their toll-free number at +1(844) 640-0543. This number will connect you to their customer service team, allowing you to get the assistance you need as swiftly as possible.

**How Do I Actually Speak To a Eva Air Representative Fast?**

For fast assistance from an EVA Air representative, use their toll-free number at +1(844) 640-0543. This is the quickest way to reach their customer service and get help with any travel-related concerns or questions you might have.

**How Do I Directly Talk With Eva Air Representative Fast?**

To directly talk with an EVA Air representative fast, call their toll-free number at +1(844) 640-0543. Their customer service team is available to provide prompt assistance and address any issues or questions related to your travel with EVA Air.

**How Do I Directly Talk With an Eva Air Representative Fast?**

If you need to directly talk with an EVA Air representative quickly, dial their toll-free number at +1(844) 640-0543. This will connect you with their customer service, ensuring you receive the necessary support without delay.

**How Can I Talk With Eva Air Representative Quick?**

To talk with an EVA Air representative quickly, call their toll-free number at +1(844) 640-0543. Their customer service team is ready to assist you with any inquiries or concerns, providing you with the support you need efficiently.

**How Do I Connect With Eva Air?**

To connect with EVA Air, you can reach out to their customer service by calling their toll-free number at +1(844) 640-0543. This will enable you to speak with a representative who can assist you with your travel needs.

**How to speak directly at Eva Air?**

For direct communication with EVA Air, call their toll-free number at +1(844) 640-0543. This will connect you with a customer service representative who can assist you with any questions or issues you may have about your flight or booking.

**How To Quickly Connect With An Eva Air?**

To quickly connect with an EVA Air representative, dial their toll-free number at +1(844) 640-0543. This will ensure you get the prompt assistance you need for any travel-related concerns or inquiries.

**How To Quickly Connect With An Eva Air Live Representative?**

To connect quickly with an EVA Air live representative, call their toll-free number at +1(844) 640-0543. Their customer service team is available to help you with any questions or issues you may have regarding your travel.

**How To Quickly Connect With An Eva Air Live Agent?**

For quick connection with an EVA Air live agent, dial their toll-free number at +1(844) 640-0543. This will connect you to their customer service team, providing the help you need in a timely manner.

**How To Connect With An Eva Air?**

To connect with EVA Air, reach out to their customer service by calling their toll-free number at +1(844) 640-0543. This will connect you with a representative who can assist you with your travel needs and questions.

**How To I Connect With An Eva Air?**

For connecting with EVA Air, you can call their toll-free number at +1(844) 640-0543. This will connect you with their customer service team, who can help you with any travel-related concerns or questions you may have.

**How Can I Connect With An Eva Air?**

To connect with EVA Air, you can reach out to their customer service by calling their toll-free number at +1(844) 640-0543. This will enable you to speak with a representative who can assist you with your travel needs.

**How Can I Connect With An Eva Air Representative Fast?**

To connect with an EVA Air representative quickly, call their toll-free number at +1(844) 640-0543. Their customer service team is ready to assist you with any inquiries or concerns, providing you with the support you need efficiently.

**How Do I Really Get Through At Eva Air?**

To really get through at EVA Air, use their toll-free number at +1(844) 640-0543. This number will connect you directly to their customer service team, ensuring your questions or issues are addressed promptly.

**How Do I Really Get Through on Eva Air?**

To get through on EVA Air, call their toll-free number at +1(844) 640-0543. This will connect you to their customer service team, providing you with the assistance you need for any travel-related concerns or questions.

**How Do I Really Get Through From Eva Air?**

To really get through from EVA Air, contact their customer service by dialing their toll-free number at +1(844) 640-0543. This ensures you get the help you need directly from their support team.
seo_marketing_282c2247794
1,913,102
Computer Vision Meetup: 5 Handy Ways to Use Embeddings, the Swiss Army Knife of AI
Discover the incredible potential of vector search engines beyond RAG for large language models!...
0
2024-07-05T19:43:17
https://dev.to/voxel51/computer-vision-meetup-5-handy-ways-to-use-embeddings-the-swiss-army-knife-of-ai-43ef
computervision, ai, machinelearning, datascience
Discover the incredible potential of vector search engines beyond RAG for large language models! Explore 5 handy embeddings applications: robust OCR document search, cross-modal retrieval, probing perceptual similarity, comparing model representations, concept interpolation, and a bonus—concept space traversal. Sharpen your data understanding and interaction with embeddings and open source FiftyOne.

About the Speaker

[Harpreet Sahota](https://www.linkedin.com/in/harpreetsahota204/) is a hacker-in-residence and machine learning engineer with a passion for deep learning and generative AI. He’s got a deep interest in RAG, Agents, and Multimodal AI.

Not a Meetup member? Sign up to attend the next event: https://voxel51.com/computer-vision-ai-meetups/

Recorded on July 3, 2024 at the AI, Machine Learning and Computer Vision Meetup.
jguerrero-voxel51
1,913,136
How To Speak At Someone On Eva Air?
Speaking with a representative at EVA Air can be a straightforward process if you know the right...
0
2024-07-05T19:41:25
https://dev.to/seo_marketing_282c2247794/how-to-speak-at-someone-on-eva-air-46g7
Speaking with a representative at EVA Air can be a straightforward process if you know the right steps to take. EVA Air is committed to providing excellent customer service, and they have several channels through which passengers can reach out for assistance. The most efficient way to speak with someone at EVA Air is by contacting their customer service team directly. You can do this by calling their toll-free number at +1(844) 640-0543. This number is available for customers who need immediate assistance or have specific inquiries about their flight reservations, baggage policies, or other travel-related concerns.

When calling EVA Air’s customer service, it's important to have your booking information handy. This includes your booking reference number, flight details, and personal identification information. Having this information readily available will help the customer service representative quickly access your reservation details and provide you with the most accurate and efficient service. Additionally, it's a good idea to call during non-peak hours if possible, as this can reduce your wait time and ensure that you get through to a representative more quickly. For immediate assistance, call their toll-free number at +1(844) 640-0543.

If you prefer not to call, EVA Air also offers other methods of communication. You can visit their official website and use the live chat feature to speak with a representative in real-time. This is particularly useful if you have a quick question or need assistance with something that can be resolved through an online chat. The live chat option is often available on the customer service page of EVA Air’s website and provides a convenient way to get support without having to make a phone call. However, for urgent issues, calling their toll-free number at +1(844) 640-0543 is recommended.

Another option is to send an email to EVA Air’s customer service team. This method is suitable for less urgent inquiries or if you need to provide detailed information that may not be easily conveyed over the phone or through live chat. When sending an email, make sure to include all relevant details such as your booking information, specific questions, and any supporting documents that may be necessary. This will help the customer service team respond to your inquiry more effectively and efficiently. For more immediate concerns, consider calling their toll-free number at +1(844) 640-0543.

For those who are more comfortable using social media, EVA Air maintains active profiles on various platforms such as Facebook, Twitter, and Instagram. You can send a direct message to their social media accounts with your questions or concerns. While this may not be the fastest way to get a response, it is another viable option for reaching out to EVA Air’s customer service team. Social media can also be useful for staying updated on any announcements or changes to your flight that may be posted by the airline. Nonetheless, for urgent matters, the toll-free number +1(844) 640-0543 remains the best option.

In addition to these options, EVA Air has customer service desks located at airports where they operate. If you are at the airport and need assistance, you can approach one of these desks for help. The staff at the customer service desks can assist with a range of issues including ticket changes, baggage inquiries, and general travel information. However, if you prefer to resolve your issues before arriving at the airport, calling their toll-free number at +1(844) 640-0543 can save you time and ensure you are prepared for your journey.

It's also worth noting that EVA Air has a comprehensive FAQ section on their website. Before reaching out to customer service, you might want to check the FAQ section to see if your question has already been answered. This can save you time and provide you with the information you need without having to contact a representative. However, for specific inquiries or issues that are not covered in the FAQ, calling the toll-free number at +1(844) 640-0543 is advisable.

In summary, there are multiple ways to speak with someone at EVA Air depending on your preference and the nature of your inquiry. The most direct method is to call their toll-free number at +1(844) 640-0543. However, you can also use live chat, email, social media, or visit a customer service desk at the airport. Whichever method you choose, EVA Air’s customer service team is there to assist you and ensure that your travel experience is as smooth and pleasant as possible.

How To Speak At Someone On Eva Air?
How To Speak At Someone On Eva Air Airline?
How Do I Escalate An Issue With Eva Air?
How To Speak Directly At Eva Air?
How do I speak to a Eva Air representative fast?
How Do I Actually Speak To a Eva Air Representative Fast?
How Do I Directly Talk With Eva Air Representative Fast?
How Do I Directly Talk With an Eva Air Representative Fast?
How Can I Talk With Eva Air Representative Quick?
How Do I Connect With Eva Air?
How to speak directly at Eva Air?
How To Quickly Connect With An Eva Air?
How To Quickly Connect With An Eva Air Live Representative?
How To Quickly Connect With An Eva Air Live Agent?
How To Connect With An Eva Air?
How To I Connect With An Eva Air?
How Can I Connect With An Eva Air?
How Can I Connect With An Eva Air Representative Fast?
How Do I Really Get Through At Eva Air?
How Do I Really Get Through on Eva Air?
How Do I Really Get Through From Eva Air?
seo_marketing_282c2247794
1,913,135
Automating User Management on Linux with Bash
Managing users and groups on a Linux system can be a repetitive and error-prone task, especially in...
0
2024-07-05T19:39:39
https://dev.to/isaac_obuor_4ec2278316110/automating-user-management-on-linux-with-bash-1b4b
Managing users and groups on a Linux system can be a repetitive and error-prone task, especially in environments where users frequently join or leave the system. In this article, I'll walk you through creating a Bash script that automates user and group management, ensuring secure password handling and detailed logging.

This task is part of the HNG Internship, a fantastic program that helps interns gain real-world experience. You can learn more about the program at the [HNG Internship website](https://hng.tech/internship) or consider hiring some of their talented interns through the [HNG Hire page](https://hng.tech/hire). The source code can be found on my [GitHub](https://github.com/Obuorcloud/linux_user_creation.git).

#### Introduction

User management is a critical task for system administrators. Automating this process not only saves time but also reduces the risk of errors. This script will:

- Create users from an input file.
- Assign users to specified groups.
- Generate secure random passwords.
- Log all actions for auditing purposes.

#### Prerequisites

- A Linux system with the Bash shell.
- `sudo` privileges to execute administrative commands.
- `openssl` for generating random passwords.

#### Script Breakdown

Here's the script in its entirety:

```bash
#!/bin/bash

# Check if the input file exists
if [ ! -f "$1" ]; then
    echo "Error: Input file not found."
    exit 1
fi

# Ensure log and secure directories are initialized once
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"

# Initialize log file
if [ ! -f "$LOG_FILE" ]; then
    sudo touch "$LOG_FILE"
    sudo chown root:root "$LOG_FILE"
fi

# Initialize password file
if [ ! -f "$PASSWORD_FILE" ]; then
    sudo mkdir -p /var/secure
    sudo touch "$PASSWORD_FILE"
    sudo chown root:root "$PASSWORD_FILE"
    sudo chmod 600 "$PASSWORD_FILE"
fi

# Redirect stdout and stderr to the log file
exec > >(sudo tee -a "$LOG_FILE") 2>&1

# Function to check if user exists
user_exists() {
    id "$1" &>/dev/null
}

# Function to check if a group exists
group_exists() {
    getent group "$1" > /dev/null 2>&1
}

# Function to check if a user is in a group
user_in_group() {
    id -nG "$1" | grep -qw "$2"
}

# Read each line from the input file
while IFS=';' read -r username groups; do
    # Trim whitespace
    username=$(echo "$username" | tr -d '[:space:]')
    groups=$(echo "$groups" | tr -d '[:space:]')

    # Check if the user already exists
    if user_exists "$username"; then
        echo "User $username already exists."
    else
        # Create user
        sudo useradd -m "$username"

        # Generate random password
        password=$(openssl rand -base64 12)

        # Set password for user
        echo "$username:$password" | sudo chpasswd

        # Log actions
        echo "User $username created. Password: $password"

        # Store passwords securely
        echo "$username,$password" | sudo tee -a "$PASSWORD_FILE"
    fi

    # Ensure the user's home directory and personal group exist
    sudo mkdir -p "/home/$username"
    sudo chown "$username:$username" "/home/$username"

    # Split the groups string into an array
    IFS=',' read -ra group_array <<< "$groups"

    # Check each group
    for group in "${group_array[@]}"; do
        if group_exists "$group"; then
            echo "Group $group exists."
        else
            echo "Group $group does not exist. Creating group $group."
            sudo groupadd "$group"
        fi

        if user_in_group "$username" "$group"; then
            echo "User $username is already in group $group."
        else
            echo "Adding user $username to group $group."
            sudo usermod -aG "$group" "$username"
        fi
    done
done < "$1"
```

#### How It Works

1. **Input File Check**: The script starts by checking if the input file exists. If not, it exits with an error message.
2. **Log and Secure File Initialization**: It initializes the log and password files, ensuring they have the correct permissions.
3. **Function Definitions**: Functions to check user existence, group existence, and user membership in a group are defined.
4. **User and Group Processing**: The script reads the input file line by line, processes each username and group, creates users and groups as needed, and assigns users to groups.
5. **Password Handling**: Secure random passwords are generated and assigned to new users, and all actions are logged.

#### Running the Script

1. **Prepare the Input File**: Create a file named `input_file.txt` with the following format:

   ```
   alice;developers,admins
   bob;developers
   charlie;admins,users
   ```

2. **Make the Script Executable**:

   ```sh
   chmod +x user_management.sh
   ```

3. **Run the Script**:

   ```sh
   sudo ./user_management.sh input_file.txt
   ```
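After a run, you can spot-check the results directly on the system. These checks are a quick sketch, assuming the sample `input_file.txt` above was used:

```sh
getent passwd alice                      # confirm the user was created
id alice                                 # confirm group memberships (developers, admins)
sudo tail /var/log/user_management.log   # review the logged actions
sudo cat /var/secure/user_passwords.csv  # stored credentials (readable by root only)
```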
#### Conclusion

This Bash script simplifies user management on Linux systems, ensuring users are created with secure passwords, assigned to appropriate groups, and all actions are logged for audit purposes. By automating these tasks, system administrators can save time and reduce errors. Feel free to customize this script further to suit your specific needs. Happy automating!

#### About the Author

Isaac Obuor is a seasoned DevOps engineer with extensive experience in automating system administration tasks. Follow Isaac Obuor on [GitHub](https://github.com/Obuorcloud/linux_user_creation.git) for more insightful articles and projects.
isaac_obuor_4ec2278316110
1,913,134
Building Stunning Portfolio Websites for Clients in 2024: A Case Study
Creating a captivating, functional, and modern portfolio website is crucial for any professional...
0
2024-07-05T19:35:48
https://dev.to/syedahmedullah14/building-stunning-portfolio-websites-for-clients-in-2024-a-case-study-4cdo
webdev, javascript, programming, nextjs
Creating a captivating, functional, and modern portfolio website is crucial for any professional looking to make an impactful online presence. In 2024, with the advancement of web technologies, the possibilities for creating such websites are endless. Today, I'm excited to share a recent project I completed for a client, showcasing the process, technology stack, and features that made this portfolio website truly stand out.

## The Project Overview

I recently completed a cutting-edge portfolio website for a delighted client, which you can view live here. This project involved integrating several advanced technologies and design techniques to deliver a seamless user experience.

![Hero Section](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sk4yxax7ya2l4l1o19ri.PNG)

## Technology Stack

The website was built using a modern tech stack that includes:

- **Next.js**: A powerful React framework that offers server-side rendering and static site generation, ensuring fast load times and SEO benefits.
- **Three.js**: A JavaScript library that enables the creation of complex 3D graphics in the browser, adding depth and engagement to the site.
- **Framer Motion**: A library for creating smooth and interactive animations, making the website visually appealing and engaging.
- **Tailwind CSS**: A utility-first CSS framework that allows for rapid and efficient styling, resulting in a consistent and modern design.
- **Sentry**: A tool for real-time error tracking and performance monitoring, ensuring a seamless user experience by quickly identifying and resolving issues.

## Key Features

The portfolio website boasts several advanced features:

### 1. Hero Section
The hero section features a captivating introduction with a spotlight effect and dynamic background, immediately grabbing the visitor's attention.

### 2. Bento Grid
A modern layout that presents personal information using cutting-edge CSS design techniques, providing a clean and organized view of the client's details.

### 3. 3D Elements
Interactive 3D design elements, such as a GitHub-style globe and card hover effects, add depth and engagement, making the site stand out from traditional 2D designs.

### 4. Testimonials
A dynamic testimonials area with scrolling or animated content enhances engagement and provides social proof of the client's skills and expertise.

### 5. Work Experience
Prominently displays the client's professional background, emphasizing credibility and experience in an organized and visually appealing manner.

### 6. Canvas Effect
Innovative use of HTML5 canvas to create visually striking effects in the "approaches" section, adding a unique and creative touch to the website.

### 7. Responsiveness
Seamless adaptability across all devices ensures an optimal viewing experience for every user, regardless of the device they're using.

### 8. Sentry Integration
Implemented Sentry for real-time error tracking and performance monitoring, ensuring any issues are quickly identified and resolved, maintaining a high-quality user experience.

![About section](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d2uwt0kft6tu0oi7brji.PNG)
![About section](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/594f4pwta9rzzhrcl0av.PNG)
![Experience](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/do84tgp253lbxgb03uy2.PNG)
![Approach](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6mwranfao7r654kmd5zs.PNG)

### Why Sentry?

Incorporating Sentry into the project has been a game-changer. Here's why every developer should consider using Sentry or a similar tool:

- **Real-time Error Tracking**: Captures and aggregates errors with detailed stack traces, helping identify root causes quickly.
- **Performance Monitoring**: Tracks performance issues like slow queries and long load times, pinpointing bottlenecks.
- **User Feedback**: Collects direct feedback from users experiencing issues, providing invaluable context for troubleshooting.
- **Alerts and Notifications**: Sends alerts via email, Slack, and more, enabling swift responses to issues.
- **Seamless Integration**: Works effortlessly with various frameworks and tools like JavaScript, Python, Ruby, and Node.js.

![Sentry popup](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o7fseu662wrk2q3twr2h.PNG)

### Getting Started with Sentry

Integrating Sentry into your project is straightforward. Here's a quick guide to get you started:

1. **Sign Up for Sentry**: If you don't have an account, sign up at sentry.io.
2. **Install Sentry**: For a Next.js project, install the Sentry SDK.

   ```
   npm install @sentry/nextjs
   ```

3. **Initialize Sentry**: In your `sentry.server.config.js` and `sentry.client.config.js`, initialize Sentry with your DSN.

   ```
   import * as Sentry from "@sentry/nextjs";

   Sentry.init({
     dsn: "YOUR_SENTRY_DSN",
     tracesSampleRate: 1.0,
   });
   ```

4. **Capture Errors**: Manually capture errors using `captureException`.

   ```
   try {
     // Your code here
   } catch (error) {
     Sentry.captureException(error);
   }
   ```

5. **Monitor Performance**: Sentry automatically monitors performance metrics when integrated with Next.js.
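Beyond automatic capture, you can attach extra context before reporting an error by hand. Here's a small sketch using the SDK's scope helpers; the `handleSubmit` and `submitContactForm` names are hypothetical, not part of the project above:

```
import * as Sentry from "@sentry/nextjs";

async function handleSubmit(data) {
  try {
    await submitContactForm(data); // hypothetical app function
  } catch (error) {
    Sentry.withScope((scope) => {
      scope.setTag("feature", "contact-form"); // searchable tag in the Sentry UI
      scope.setExtra("payloadSize", JSON.stringify(data).length);
      Sentry.captureException(error);
    });
  }
}
```

Tags and extras make it much easier to slice aggregated errors by feature when triaging.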
## Building Portfolio Websites for Clients in 2024

In 2024, building portfolio websites for clients involves leveraging the latest technologies and design trends to create engaging, functional, and aesthetically pleasing sites. Here's my approach:

- **Understanding Client Needs**: Start with a detailed discussion to understand the client's requirements, goals, and target audience.
- **Choosing the Right Tech Stack**: Select a tech stack that offers performance, scalability, and ease of maintenance. For this project, Next.js, Three.js, Framer Motion, and Tailwind CSS were the perfect fit.
- **Design and Prototyping**: Create wireframes and prototypes to visualize the layout and flow of the website.
- **Development and Testing**: Develop the website with a focus on clean code, performance, and responsiveness. Implement thorough testing to ensure a bug-free experience.
- **Deployment and Monitoring**: Deploy the website on a reliable platform (e.g., Netlify) and use tools like Sentry for continuous monitoring and performance tracking.
- **Client Feedback and Iteration**: Collect feedback from the client and make necessary adjustments to ensure satisfaction.

### Conclusion

This project exemplifies how advanced web technologies and thoughtful design can create a stunning and functional portfolio website. By leveraging tools like Sentry for real-time error tracking and performance monitoring, developers can ensure a seamless user experience. Special thanks to JavaScript Mastery for their guidance and for introducing me to invaluable tools like Aceternity UI and Sentry.

If you're looking to build a modern, minimalist portfolio website that stands out and delivers top-notch performance, feel free to reach out. Let's create something amazing together!

Check out the live site: shareef-shahzer-portfolio.netlify.app

Happy coding! 🚀
syedahmedullah14
1,913,133
Tezos Investment Strategies: A Comprehensive Guide
Understanding Tezos Founded in 2014 by Arthur and Kathleen Breitman, Tezos is a...
27,673
2024-07-05T19:34:57
https://dev.to/rapidinnovation/tezos-investment-strategies-a-comprehensive-guide-4f6k
## Understanding Tezos

Founded in 2014 by Arthur and Kathleen Breitman, Tezos is a decentralized blockchain platform designed to facilitate the development and use of decentralized applications (dApps) and smart contracts. Despite initial management issues, Tezos gained significant attention in 2017 with its unprecedented initial coin offering (ICO), which raised $232 million in investment. Tezos has shown resilience, overcome obstacles, and made significant progress in its development, solidifying its position as a pioneer in the blockchain ecosystem.

## Key Features and Technology Behind Tezos

Tezos has numerous critical aspects that distinguish it from other blockchain platforms:

**1\. On-chain Governance Model:** Tezos uses a unique governance system that gives token holders direct control over the platform's progress. This architecture allows token holders to propose and vote on protocol updates, eliminating the need for hard forks.

**2\. Liquid Proof-of-Stake (LPoS) Consensus:** Tezos uses LPoS as its consensus mechanism, allowing token holders to actively participate in network security and consensus by staking. Both delegators and validators are rewarded with additional tokens in exchange for their engagement.

**3\. Support for Smart Contracts:** Tezos' support for smart contracts offers developers a stable framework for developing decentralised apps (dApps). This provides more security and flexibility, allowing a wide range of applications in fields including finance, gaming, and supply chain management.

**4\. Secure and Flexible Framework:** Tezos employs formal verification techniques to mathematically guarantee the validity of smart contract code, thereby lowering the possibility of vulnerabilities and security breaches.

## Tezos as a Smart Contract Platform

Tezos provides developers with a strong platform capable of supporting a wide range of decentralised apps (dApps) and smart contracts. Its self-amending process allows the blockchain to evolve and adapt independently over time, eliminating the need for controversial hard forks or external interventions. This intrinsic flexibility makes Tezos an extraordinarily adaptable solution for a wide range of use cases, including decentralised finance (DeFi), supply chain management, digital identity verification, and voting systems.

## Diverse Applications within the Tezos Ecosystem

The Tezos ecosystem is distinguished by its broad range of applications across multiple industries. Tezos supports lending, borrowing, and trading in the context of decentralised finance (DeFi), providing users with financial autonomy and transparency. Projects such as Dexter, Kolibri, and Kalamint demonstrate Tezos' adaptability and inventiveness by providing decentralised exchange services, stablecoin solutions, and NFT markets, respectively.

## Investing in Tezos

Tezos coins (XTZ) may appeal to investors seeking long-term growth in their portfolios. Tezos distinguishes itself with its self-amending protocol, which allows for seamless updates without hard forks, assuring adaptability to changing market demands. However, it is critical for investors to understand and limit the risks involved with Tezos investments, including market volatility, legislative changes, and network improvements.

## Tezos Investment Strategies

**1\. HODLing Tezos:** This means purchasing XTZ tokens with the goal of retaining them for an extended period of time, regardless of any transitory price swings. This strategy indicates trust in Tezos' underlying technology and its potential to become a major player in the digital asset ecosystem.
**2\. Staking Tezos:** This entails investors storing their XTZ tokens in a wallet to help the network run. Stakeholders receive additional XTZ tokens in exchange for their contributions to the network's security and stability.

**3\. Trading Tezos:** This entails actively purchasing and selling XTZ tokens on cryptocurrency exchanges to profit from short-term price volatility. Traders must use risk management tactics and stay current on market movements to increase their chances of success.

**4\. Diversifying Tezos Investments:** This includes directing funds not only to Tezos but also to a range of other assets. Diversification allows investors to reduce the impact of bad events on a single investment, increasing the overall resilience of their portfolio.

## Tezos Wallets and Security

Choosing a secure Tezos wallet is critical for keeping your digital assets safe from potential threats. Hardware wallets like Ledger and software wallets like Galleon are popular choices due to their dependability and security features. Implementing strong security measures, such as creating strong, unique passwords and enabling two-factor authentication (2FA), is essential.

## Future Outlook and Potential Developments

Tezos continues to evolve with regular protocol upgrades; therefore, investors must stay updated about upcoming advancements. Despite limitations, Tezos has enormous potential to alter the future of digital wealth creation, providing investors with a compelling path to financial development and prosperity.

## Conclusion

Tezos offers a compelling possibility for anyone wishing to build wealth in the dynamic blockchain environment. To profit from this potential, investors must first understand Tezos' key concepts, explore its numerous investing techniques, and prioritise security precautions. By taking a proactive approach and knowing the basics, investors may confidently handle Tezos investments and position themselves for financial success. Tezos continues to innovate and evolve, establishing itself as a formidable participant in the digital asset environment, prepared to influence the future of finance with its diverse capabilities and great growth prospects.

📣📣 Drive innovation with intelligent AI and secure blockchain technology! Check out how we can help your business grow!

[Blockchain App Development](https://www.rapidinnovation.io/service-development/blockchain-app-development-company-in-usa)
[AI Software Development](https://www.rapidinnovation.io/ai-software-development-company-in-usa)

## URLs

* <http://www.rapidinnovation.io/post/a-path-to-digital-wealth-creation-with-tezos-investment-strategies>

## Hashtags

#BlockchainTechnology #TezosInvestment #Cryptocurrency #SmartContracts #DigitalAssets
rapidinnovation
1,913,131
The Changing Environment: A Critical Challenge for Today's World
The environment, our planet's life support system, is undergoing profound changes due to a variety of...
0
2024-07-05T19:31:40
https://dev.to/elon01/the-changing-environment-a-critical-challenge-for-todays-world-41gd
The environment, our planet's life support system, is undergoing profound changes due to a variety of natural and human-induced factors. The impacts of these changes are far-reaching, affecting ecosystems, biodiversity, climate patterns, and human societies. Understanding and addressing these changes is crucial for ensuring a sustainable future for all life on Earth. This article delves into the significant aspects of environmental change in today's world, highlighting key issues, their causes, and potential solutions.

**Climate Change: The Driving Force**

One of the most pressing aspects of **_[environmental](https://en.wikipedia.org/wiki/Environment)_** change is climate change, primarily driven by human activities such as burning fossil fuels, deforestation, and industrial processes. The Intergovernmental Panel on Climate Change (IPCC) reports that global temperatures have risen by approximately 1.2°C above pre-industrial levels, with significant implications for weather patterns, sea levels, and natural disasters.

The increase in greenhouse gas emissions, particularly carbon dioxide and methane, traps heat in the atmosphere, leading to global warming. This warming contributes to more frequent and severe weather events such as hurricanes, droughts, and heatwaves. For instance, the 2020 Atlantic hurricane season was one of the most active on record, with 30 named storms, including 13 hurricanes.

**Biodiversity Loss: A Crisis of Extinction**

Alongside climate change, biodiversity loss represents a critical challenge. Human activities such as habitat destruction, pollution, overfishing, and introduction of invasive species have accelerated the rate of species extinction. The World Wildlife Fund (WWF) estimates that populations of mammals, birds, fish, reptiles, and amphibians have declined by an average of 68% since 1970.

Biodiversity is essential for ecosystem stability, providing services such as pollination, nutrient cycling, and disease regulation. The loss of species can disrupt these services, leading to ecosystem collapse and reduced resilience against environmental changes. For example, the decline of bee populations worldwide threatens pollination services, which are vital for food production.

**Pollution: The Ubiquitous Threat**

Pollution in its many forms—air, water, soil, and plastic—is another significant factor contributing to environmental change. Air pollution, primarily from industrial emissions and vehicle exhaust, poses severe health risks and contributes to climate change. The World Health Organization (WHO) states that 90% of the world's population breathes air containing high levels of pollutants, leading to respiratory and cardiovascular diseases.

Water pollution from agricultural runoff, industrial discharges, and plastic waste contaminates drinking water sources, harms marine life, and disrupts aquatic ecosystems. The Great Pacific Garbage Patch, a massive collection of marine debris, exemplifies the pervasive issue of plastic pollution. Microplastics, tiny plastic particles, have been found in the bodies of marine organisms, posing risks to food safety and human health.

**Deforestation: The Loss of Forests**

Deforestation, driven by agriculture, logging, and infrastructure development, is a major contributor to environmental change. Forests play a crucial role in regulating the climate, supporting biodiversity, and providing livelihoods for millions of people.
The Food and Agriculture Organization (FAO) reports that the world lost 10 million hectares of forest per year from 2015 to 2020. The Amazon rainforest, often referred to as the "lungs of the Earth," has been severely affected by deforestation. The loss of forests not only releases stored carbon dioxide into the atmosphere, exacerbating climate change, but also reduces the planet's capacity to absorb future emissions. Moreover, deforestation disrupts indigenous communities who depend on forests for their cultural and physical survival.

**Ocean Changes: Rising Seas and Acidification**

The world's oceans are undergoing significant changes due to climate change and human activities. Rising sea levels, caused by the thermal expansion of seawater and the melting of polar ice caps, threaten coastal communities and ecosystems. The National Oceanic and Atmospheric Administration (NOAA) projects that global sea levels could rise by up to 2.5 meters by the end of this century, displacing millions of people and submerging valuable habitats.

Ocean acidification, resulting from the absorption of excess carbon dioxide by seawater, poses a threat to marine life, particularly organisms with calcium carbonate shells or skeletons, such as corals and shellfish. Coral reefs, which support a vast array of marine biodiversity and provide critical ecosystem services, are at risk of bleaching and degradation due to warming and acidifying oceans.

**Solutions and Mitigation Strategies**

Addressing the multifaceted challenges of environmental change requires a comprehensive and coordinated approach involving governments, businesses, and individuals. Key strategies include:

**Transitioning to Renewable Energy**: Shifting from fossil fuels to renewable energy sources such as solar, wind, and hydroelectric power can significantly reduce greenhouse gas emissions. Investment in clean energy technologies and infrastructure is essential for a sustainable energy transition.

**Conservation and Restoration**: Protecting existing natural habitats and restoring degraded ecosystems can enhance biodiversity and ecosystem services. Initiatives like reforestation, wetland restoration, and the establishment of protected areas are critical for conservation efforts.

**Sustainable Agriculture and Fishing**: Adopting sustainable practices in agriculture and fisheries can reduce environmental impacts and ensure the long-term viability of food production. Techniques such as organic farming, agroforestry, and sustainable fishing quotas can mitigate biodiversity loss and pollution.

**Reducing Waste and Pollution**: Implementing policies to reduce waste generation, promote recycling, and control pollution is vital. Banning single-use plastics, improving waste management systems, and reducing emissions from industrial sources can help mitigate pollution.

**Climate Adaptation and Resilience**: Building resilience to climate impacts through adaptation measures is crucial for vulnerable communities. Infrastructure improvements, disaster preparedness, and ecosystem-based adaptation can reduce the risks associated with climate change.

**International Cooperation**: Environmental issues are global challenges that require international cooperation and agreements. Treaties such as the Paris Agreement aim to unite countries in the fight against climate change by setting emission reduction targets and promoting sustainable development.
**Conclusion**

Environmental change poses a significant threat to the planet's future, but it also presents an opportunity for transformative action. By understanding the causes and consequences of these changes, and by implementing effective strategies, we can work towards a more sustainable and resilient world. The responsibility lies with all of us—governments, businesses, and individuals—to make informed choices and take decisive action to protect our environment for generations to come.
elon01
1,913,128
Install Go on Ubuntu Machines
Download Source Download source from this link Go Binaries Unpack...
0
2024-07-05T19:23:36
https://dev.to/ahmed_abir/install-go-in-ubuntu-machines-4a5m
installgo, setupgo, goforebpf, go
## Download Source

Download the source from this link: [Go Binaries](https://go.dev/dl/)

## Unpack Packages

After downloading, the binary archive will likely be in your `~/Downloads` folder. From a terminal, use this command:

```
cd Downloads
```

Then list the matching items in the directory with this command:

```
ls -l | grep go
```

![go-binary-find](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gmpuosioe0ju97xqpubo.png)

Extract the binary files from the `go1.22.2.linux-amd64.tar.gz` file in the Downloads folder (you may have downloaded another version) using this command:

```
sudo tar -C /usr/local -xzf go1.22.2.linux-amd64.tar.gz
```

This command extracts the files into the `/usr/local` folder.

## Add to PATH variable

You need to find where the `go` binaries live and add that path to your `PATH` variable; otherwise you will run into problems like `go: command not found`. The command below locates the `go` binary:

```
whereis go
```

![whereis-go](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wdta72y62mg6n58433xa.png)

Now copy the path and use the command below:

```
export PATH=$PATH:{your_copied_path_here}/bin
```

## Verify Installation

Now use the command below to verify your installation:

```
go version
```

![verify-installation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/59e2upgoaintn8reoz2x.png)
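Keep in mind that `export` only affects the current shell session; in a new terminal or after a reboot, the `go` command may be missing again. A small sketch of one common way to make the change permanent, assuming a bash shell and the default `/usr/local/go` install location:

```
# Append the PATH update to your shell profile, then reload it
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile
source ~/.profile
go version   # should now work in new sessions as well
```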
ahmed_abir
1,913,127
Automating Linux User Management and Permissions with Bash Scripting
In this article, I'll be walking through the steps to create a user management bash script to meet...
0
2024-07-05T19:19:43
https://dev.to/vinsu/automating-linux-user-management-and-permissions-with-bash-scripting-24k9
In this article, I'll be walking through the steps to create a user management bash script to meet some predefined requirements. The main goal of the script is to allow an administrator to create users and groups for different purposes within a Linux environment, so without further ado, let's get into it.

## Requirements

1. Users and their groups are defined in a text file that will be supplied to the script as an argument
2. A corresponding home directory will be created for each user
3. User passwords should be stored securely in a file with path /var/secure/user_passwords.txt
4. Logs of all actions should be written to /var/log/user_management.log
5. Only the owner, in this case root, should be able to access the user_passwords.txt file
6. Errors should be gracefully handled

## Creating users

To create a user, you can use the useradd command. This command can be set to create a user, create their home directory, and set their password. If you simply wish to add a user to the Linux system, you can run:

```
sudo useradd <username>
```

`<username>` here is the name of the user you wish to add. However, if you wish to create a home directory and add a password, you can run this instead:

```
sudo useradd -m -p $(openssl passwd -6 "$password") <username>
```

This command uses the `-m` flag to add a home directory and the `-p` flag to add a password (encrypted using openssl) for the user.

## Adding users to groups

By default, when a user is created, a personal group with their username is also created. This means you won't need to explicitly create this. However, to add a user to a group, say `sudo`, you must use the usermod command. Find the basic command structure below:

```
sudo usermod -aG "<group>" "<user>"
```

The `-a` flag is used to append the user to the new group without removing them from existing groups. The `-G` flag on the other hand specifies the group that the user will be added to, in this case `<group>`.

## Creating groups

When a group doesn't exist, it should be created before a user is added to it. Groups are typically created using the groupadd command. Here's an example of the command in action:

```
sudo groupadd "<group>"
```

## Combining user creation, group creation, and group addition

You can combine user creation, group creation, and adding a user to a group in a script. Say you have your users and groups defined in a semicolon-delimited file like this:

```
user1; group1, group2
user2; group3,group6
user3;group2,group3
```

You can write a script that loops through the file, extracts the relevant information, and creates the users and groups.

```
USERS_FILE=$1

mapfile -t lines < "$USERS_FILE"

# loop over each line in the array
for line in "${lines[@]}"; do
    # Remove leading and trailing whitespaces
    line=$(echo "$line" | xargs)

    # Split line by ';' and store the second part
    IFS=';' read -r user groups <<< "$line"

    # Remove leading and trailing whitespaces from the second part
    groups=$(echo "$groups" | xargs)

    # Create a variable groupsArray that is an array from splitting the groups of each user
    IFS=',' read -ra groupsArray <<< "$groups"

    # Generate a 6-character password using pwgen
    password=$(pwgen -sBv1 6 1)

    # Create the user with the generated password
    sudo useradd -m -p $(openssl passwd -6 "$password") "$user"

    # loop over each group in the groups array
    for group in "${groupsArray[@]}"; do
        group=$(echo "$group" | xargs)

        # Check if group exists, if not, create it
        if ! grep -q "^$group:" /etc/group; then
            sudo groupadd "$group"
            echo "Created group $group"
        fi

        # Add user to the group
        sudo usermod -aG "$group" "$user"
        echo "Added $user to $group"
    done

    echo "User $user created and added to appropriate groups"
done
```
Now, the script above does the following:

1. It takes in a single argument, expressed using `$1`. It then sets this argument as the variable `$USERS_FILE`.
2. It uses the `mapfile` command to load the content of the `$USERS_FILE` into an array called `lines`.
3. It loops through each line of `lines` and extracts the user and groups using the Internal Field Separator (IFS) shell mechanism.
4. It generates a 6-character password using pwgen. pwgen is a Linux package that allows you to create passwords to your exact specification.
5. It loops over the groups, after splitting the groups string with IFS, creates each group if it doesn't exist, and adds the user to the group.

## Securing the script: Hashing passwords with openssl

While the script above performs all the operations needed to create users and groups, and then add the users to groups, it does not consider security. The major security issue is that the generated passwords are assigned to users in plaintext format. To solve this problem, you can utilize openssl to hash the password. You can simply run `openssl passwd -6 <generated_password>` to achieve hashing. This command uses the SHA-512 algorithm for hashing. Its security is comparable to SHA-256, which is the most prominent hashing algorithm on the internet.

## Encrypting and storing the passwords

Since this script creates users, it is wise to capture the generated passwords in a file. But to do that securely, the passwords must be encrypted. Password encryption can also be done using openssl, but it'll require an encryption key. You can use the commands below to generate, encrypt, and store a password.

```
# Generate a 6-character password using pwgen
password=$(pwgen -sBv1 6 1)

# Encrypt the password before storing it
encrypted_password=$(encrypt_password "$password" "$PASSWORD_ENCRYPTION_KEY")

# Store the encrypted password in the file
echo "$user:$encrypted_password" >> "$PASSWORD_FILE"
```

The `$PASSWORD_ENCRYPTION_KEY` and `$PASSWORD_FILE` must be defined for this operation to complete successfully.
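For completeness, here's a sketch of the matching decryption step; it mirrors the `encrypt_password` function defined in the full script below and simply adds openssl's `-d` flag, assuming the same `$PASSWORD_ENCRYPTION_KEY` is available:

```
# Mirror of encrypt_password: the -d flag switches openssl to decryption
decrypt_password() {
    echo "$1" | openssl enc -aes-256-cbc -pbkdf2 -base64 -d -pass pass:"$2"
}

# Example usage:
# decrypt_password "$encrypted_password" "$PASSWORD_ENCRYPTION_KEY"
```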
## A look at the secure script

Here's the updated script with the password encryption and password hashing functionalities:

```
#!/bin/bash

PASSWORD_FILE_DIRECTORY="/var/secure"
PASSWORD_FILE="/var/secure/user_passwords.txt"
PASSWORD_ENCRYPTION_KEY="secure-all-things"
USERS_FILE=$1

# Function to encrypt password
encrypt_password() {
    echo "$1" | openssl enc -aes-256-cbc -pbkdf2 -base64 -pass pass:"$2"
}

# Create the directory where the user's password file will be stored
sudo mkdir -p "$PASSWORD_FILE_DIRECTORY"
sudo touch "$PASSWORD_FILE"
sudo chmod 600 "$PASSWORD_FILE"       # Set read permission for only the owner of the file
sudo chown root:root "$PASSWORD_FILE" # Set the owner as the root user

# load the content of the users.txt file into an array: lines
mapfile -t lines < "$USERS_FILE"

# loop over each line in the array
for line in "${lines[@]}"; do
    # Remove leading and trailing whitespaces
    line=$(echo "$line" | xargs)

    # Split line by ';' and store the second part
    IFS=';' read -r user groups <<< "$line"

    # Remove leading and trailing whitespaces from the second part
    groups=$(echo "$groups" | xargs)

    # Create a variable groupsArray that is an array from splitting the groups of each user
    IFS=',' read -ra groupsArray <<< "$groups"

    # Generate a 6-character password using pwgen
    password=$(pwgen -sBv1 6 1)

    # Encrypt the password before storing it
    encrypted_password=$(encrypt_password "$password" "$PASSWORD_ENCRYPTION_KEY")

    # Store the encrypted password in the file
    echo "$user:$encrypted_password" >> "$PASSWORD_FILE"

    # Create the user with the generated password
    sudo useradd -m -p $(openssl passwd -6 "$password") "$user"

    # loop over each group in the groups array
    for group in "${groupsArray[@]}"; do
        group=$(echo "$group" | xargs)

        # Check if group exists, if not, create it
        if ! grep -q "^$group:" /etc/group; then
            sudo groupadd "$group"
            echo "Created group $group"
        fi

        # Add user to the group
        sudo usermod -aG "$group" "$user"
        echo "Added $user to $group"
    done

    echo "User $user created and password stored securely"
done

# remove the created password from the current shell session
unset password
```

The script above includes additional functionality such as preventing non-root users from accessing the password storage file, and removing the password variable from the shell where it is run using `unset password`.

## Adding logging to the script

The script can be further improved by logging the commands to a log file. This file can be defined as a variable at the top of the script, then a redirection command can be added to redirect logs from the script to the log file. We can also direct errors that might occur to stdout, the normal output you see when you run commands without errors. Both log types will ultimately be sent to the log file. The commands below illustrate this:

```
# Redirect stdout and stderr to log file
exec > >(tee -a "$LOG_FILE") 2>&1

echo "Executing script... (note that this line will be logged twice)" | tee -a $LOG_FILE
```

The `echo "Executing script..."` command is added so that the normal console shows the logs too. It's not wise to run a script without seeing an output. The addition of this line will ultimately mean it gets shown in the log file twice, but this is the compromise that has to be made.
## Adding Error Handling

Errors can be handled and prevented using exception handling. We can add functions that check that both openssl and pwgen are installed, and install them otherwise. We can also add handlers that check whether arguments were passed to the script, and whether the argument passed for the user's file is a valid file. Here's a snippet with these exception handlers:

```
#!/bin/bash

LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE_DIRECTORY="/var/secure"
PASSWORD_FILE="/var/secure/user_passwords.txt"
PASSWORD_ENCRYPTION_KEY="secure-all-things"
USERS_FILE=$1

# Function to display usage information
usage() {
    echo "Usage: $0 <user-data-file-path>"
    echo "  <user-data-file-path>: Path to the file containing user data."
    echo
    echo "The user data file should contain lines in the following format:"
    echo "  username;group1,group2,..."
    echo
    echo "Example:"
    echo "  light; dev,sudo"
    echo "  mayowa; www-data, admin"
    exit 1
}

# Check if script is run with sudo
if [ "$(id -u)" != "0" ]; then
    echo "This script must be run with sudo. Exiting..."
    exit 1
fi

# Check if an argument was provided
if [ $# -eq 0 ]; then
    echo "Error: No file path provided."
    usage
fi

# Check if the user's data file exists
if [ ! -e "$USERS_FILE" ]; then
    echo "Error: The provided user's data file does not exist: $USERS_FILE"
    usage
fi

# Function to check if a package is installed
is_package_installed() {
    dpkg -s "$1" >/dev/null 2>&1
}

# Check if openssl is installed
if ! is_package_installed openssl; then
    echo "openssl is not installed. Installing..."
    sudo apt-get update
    sudo apt-get install -y openssl
fi

# Check if pwgen is installed
if ! is_package_installed pwgen; then
    echo "pwgen is not installed. Installing..."
    sudo apt-get update
    sudo apt-get install -y pwgen
fi

# Check if the file exists
if [ ! -f "$USERS_FILE" ]; then
    echo "Error: $USERS_FILE not found."
    exit 1
fi
```

An exception is also added that checks if the script was run using the sudo command. This is because sudo is required to perform useradd and groupadd operations.

## Conclusion

You can find the complete script in this [GitHub repository](https://github.com/vinuch/bash-user-management). A big thanks to [HNG](https://hng.tech) for providing this chance to dive into advanced bash scripting through a practical example. To join an upcoming HNG internship, keep an eye on [their internship page](https://hng.tech/internship). You can also hire top talent for your project through the HNG network by visiting [HNG Hire](https://hng.tech/hire).
vinsu
1,913,125
How to Set Up a Host for Learning eBPF and XDP
Install Necessary Packages First update your linux package index sudo apt-get...
0
2024-07-05T19:15:22
https://dev.to/ahmed_abir/how-to-setup-host-for-learning-ebpf-and-xdp-150
ebpfsetup, xdpsetup, bpftool
## Install Necessary Packages

First update your Linux package index:

```
sudo apt-get update
```

Then install the following packages:

- `clang` compiler for compiling eBPF programs

```
sudo apt-get install -y clang
```

- `llvm` provides libraries and tools for manipulating intermediate code, used by clang.

```
sudo apt-get install -y llvm
```

- `libelf-dev` helps in working with **ELF** (Executable and Linkable Format) files, which are used for eBPF bytecode.

```
sudo apt-get install -y libelf-dev
```

- `libbpf-dev` library for loading and interacting with eBPF programs.

```
sudo apt-get install -y libbpf-dev
```

- `libpcap-dev` provides functions for network packet capture, useful for testing and debugging.

```
sudo apt-get install -y libpcap-dev
```

- `gcc-multilib` allows compiling programs for both 32-bit and 64-bit architectures.

```
sudo apt-get install -y gcc-multilib
```

- `build-essential` package that includes essential tools for compiling software (like gcc, make).

```
sudo apt-get install -y build-essential
```

- `linux-tools-common` provides common tools for kernel developers.

```
sudo apt-get install -y linux-tools-common
```

- `linux-headers-$(uname -r)` kernel headers specific to your running kernel version, needed for compiling kernel modules.

```
sudo apt-get install -y linux-headers-$(uname -r)
```

- `linux-tools-$(uname -r)` tools specific to your running kernel version, useful for performance monitoring.

```
sudo apt-get install -y linux-tools-$(uname -r)
```

- `linux-headers-generic` generic kernel headers, useful for compiling kernel modules across different kernel versions.

```
sudo apt-get install -y linux-headers-generic
```

- `linux-tools-generic` generic tools for various kernel versions, useful for performance and debugging.

```
sudo apt-get install -y linux-tools-generic
```

- `iproute2` collection of utilities for network configuration and management.

```
sudo apt-get install -y iproute2
```

- `iputils-ping` provides the ping utility for testing network connectivity.

```
sudo apt-get install -y iputils-ping
```

- `dwarves` contains tools like `pahole` to inspect the structure of compiled programs.

```
sudo apt-get install -y dwarves
```

- `tcpdump` a packet analyzer that allows you to capture and display network packets.

```
sudo apt-get install -y tcpdump
```

- `bind9-dnsutils` provides tools for **DNS** querying and testing (nslookup, dig, and similar tools).

```
sudo apt-get install -y bind9-dnsutils
```

## Install All

```
sudo apt-get update
sudo apt-get install -y clang llvm libelf-dev libbpf-dev libpcap-dev gcc-multilib build-essential linux-tools-common
sudo apt-get install -y linux-headers-$(uname -r) linux-tools-$(uname -r) linux-headers-generic linux-tools-generic
sudo apt-get install -y iproute2 iputils-ping dwarves tcpdump bind9-dnsutils
```
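With everything installed, a quick way to confirm the toolchain works end to end is to compile and attach a minimal XDP program that passes every packet through. This is only a sketch; the file name and function name are my own choices:

```
// xdp_pass.c - minimal XDP program that lets all packets through
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_pass(struct xdp_md *ctx)
{
    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```

```
clang -O2 -g -target bpf -c xdp_pass.c -o xdp_pass.o
sudo ip link set dev lo xdpgeneric obj xdp_pass.o sec xdp   # attach to loopback
sudo bpftool prog show                                      # verify it is loaded
sudo ip link set dev lo xdpgeneric off                      # detach
```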
ahmed_abir
1,913,124
Containers vs. Virtual Machines: A Beginner’s Journey
Containers: A container is an isolated environment for running an application. A Docker container...
0
2024-07-05T19:10:47
https://dev.to/rakibtweets/containers-vs-virtual-machines-a-new-developers-journey-41lb
docker, container, dockerimage, virtualmachine
**Containers:** A container is an isolated environment for running an application. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.

**Pros of Containers:**

- **Lightweight:** Containers share the host OS, making them much lighter and faster to start than VMs.
- **Efficiency:** They use fewer resources, allowing more containers to run on the same hardware compared to VMs.
- **Scalability:** Ideal for microservices and scaling applications quickly.

**Cons of Containers:**

- **Shared Kernel:** Since containers share the host OS kernel, they offer less isolation than VMs.
- **Compatibility:** Containers are limited to the OS of the host system.

**Virtual Machine (VM):** A VM is an abstraction of a machine (physical hardware). A virtual machine is a digital copy of a physical machine. You can have multiple virtual machines, each with its own individual operating system, running on the same host operating system.

**Pros of Virtual Machines:**

- **Isolation:** VMs offer strong isolation, as each VM runs its own OS.
- **Versatility:** They can run different OSes on the same physical hardware.
- **Security:** Enhanced security due to complete OS separation.

**Cons of Virtual Machines:**

- **Resource Intensive:** VMs are heavy, consuming significant CPU, memory, and storage resources.
- **Slower Startup:** VMs take longer to boot up due to their complete OS stack.

### **When to Use What?**

- **Use VMs if:** You need strong isolation for security purposes, need to run multiple different OSes, or are dealing with legacy applications that require a full OS environment.
- **Use Containers if:** You're developing microservices, need rapid scaling, or want to maximize resource efficiency.

## Differences between Containers and Virtual Machines

### Containers

1. A container is an isolated environment for running an application.
2. Allow running multiple apps in isolation.
3. More lightweight.
4. Use the OS of the host.
5. Need less hardware resources.
6. Containers do not contain a full-blown operating system.

### Virtual Machines (VMs)

1. A VM is an abstraction of a machine (physical hardware).
2. On the same physical machine we can have two different machines, each running a completely different application, and each application has its exact dependencies.
3. More heavyweight.
4. Slow to start (the entire OS needs to be loaded).
5. Resource intensive (each machine takes actual hardware resources).
6. Each VM needs a full-blown OS.

![container-vs-virtualMachines](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fgvu1tljyg77desg1sfa.png)
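To see the "lightweight" point from the comparison above in practice, you can time how quickly a container starts. A quick sketch, assuming Docker is installed:

```
# Starts a throwaway Alpine Linux container, runs one command, and removes it.
# This typically completes in about a second; booting a VM with a full OS
# to the same point would take far longer.
time docker run --rm alpine echo "hello from a container"
```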
rakibtweets
1,913,123
3 Books I Think Every Dev Should Read at Least Once
1) SICP: SICP (Structure and Interpretation of Computer Programs) is a classic computer science book...
0
2024-07-05T19:10:01
https://dev.to/brunociccarino/3-livros-que-eu-acho-que-todo-dev-deveria-ler-pelo-menos-1-vez-10go
livros, development, learning, productivity
1) SICP

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/isfuvc65yy7ygeqnsjty.jpg)

SICP (Structure and Interpretation of Computer Programs) is a classic computer science book used by MIT professors from the mid-1980s to the late 1990s to teach computer science, and it later became an official part of MIT's computer science curriculum. It uses a language considered one of the simplest in the world (Scheme, a Lisp dialect) to teach concepts from the most basic to the most advanced. To give you an idea, at the end of the course, students had to complete a project: writing an interpreter for a given language. I'm sure that after reading this book you will become a much better programmer, not only when writing code, but as someone who truly knows the technology field.

2) The Pragmatic Programmer

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4hyn21m328xs0euotjg8.png)

The philosophy of The Pragmatic Programmer makes you reflect on your actions, makes you start taking responsibility for your mistakes, and makes you understand that it's okay not to know everything about a given subject. Reading The Pragmatic Programmer makes you not only a better programmer, but a better person. I consider this book required reading for everyone who wants to become a sharp programmer, one who fully understands their role within their team, helping make well-founded decisions and making their product as efficient as possible. Here are two quotes from the book:

"Learn not to blame someone or make excuses. Don't blame all the problems on a vendor, a programming language, management, or your coworkers. Any or all of them may have a share of the blame, but it is up to you to provide solutions, not excuses."

Another very good passage, right at the beginning of the book: "If there was a risk that something could happen that the vendor couldn't resolve, you should have had a contingency plan. If the disk gets damaged, and all of your source code with it, and you don't have a backup, the failure is yours. Telling your boss 'the cat ate my source code' won't cut it."

3) Grokking Algorithms (Entendendo Algoritmos)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jr0e78lol4n72dbih3mw.jpg)

Grokking Algorithms is a book both for those just starting out and for those who have been programming for some time. It explains in a simple way how algorithms work and what they are, dispelling the idea that algorithms are the seven-headed beast you probably saw in your computer science course. After reading this book, I'm sure you'll start solving problems in a simpler way, thinking more simply and logically.
brunociccarino
1,912,052
Building Cryptpay: A Blockchain-Powered Stablecoin Payment Infrastructure Web API
In 2023, I started building Cryptpay after discovering Lazerpay, a blockchain-powered payment...
0
2024-07-05T19:08:28
https://dev.to/simeon2001/building-cryptpay-a-blockchain-powered-stablecoin-payment-infrastructure-web-api-k20
blockchain, django, python, web3
In 2023, I started building Cryptpay after discovering Lazerpay, a blockchain-powered payment infrastructure created by Emmanuel Njoku. Lazerpay caught my attention with its innovative web API, and I quickly fell in love with the product. Inspired by Lazerpay, I decided to develop something similar and add it to the projects I've worked on. Although Lazerpay has since shut down, I had already built Cryptpay, albeit in an unstable form, on the Binance Smart Chain testnet.

Blockchain technology allows people to work from anywhere globally, get paid instantly for their services, pay business partners globally, purchase raw materials, and buy products. One prominent advantage of blockchain is its speed (if there is no congestion) and the elimination of middlemen who charge large transaction fees. According to a report by Juniper Research, blockchain deployments will enable banks to save up to $27 billion on cross-border settlement transactions by the end of 2030, reducing costs by more than 11%. Ethereum, in particular, has demonstrated disruptive economics, creating over 10x cost advantages against incumbent technologies. Financial institutions acknowledge that distributed ledger technology will save billions of dollars for banks and major financial institutions over the next decade.

With the rise of blockchain, numerous cryptocurrencies have been built and used for payment. However, there are significant UX challenges for newcomers to crypto. Cryptpay aims to simplify integrating crypto payments into various applications, such as e-commerce web apps. This article will guide you through designing a blockchain-powered payment infrastructure web API, allowing users to accept stablecoins like USDT, DAI, BUSD, etc., for their business payments or donations. We will delve into understanding blockchain-powered payment infrastructure web APIs on a technical level and explain how to build your own easily.

---

## Prerequisites

This article requires knowing a programming language; a basic understanding of the following technical concepts is also beneficial:

- **Blockchain**
  - [Accounts](https://ethereum.org/en/developers/docs/accounts/)
  - [Gas and Gas Fee](https://ethereum.org/en/developers/docs/gas/)
  - [Node and Clients](https://ethereum.org/en/developers/docs/nodes-and-clients/)
  - [Smart-Contracts](https://ethereum.org/en/developers/docs/smart-contracts/)
  - [Transactions](https://ethereum.org/en/developers/docs/transactions/)
  - [HD-Wallet](https://weteachblockchain.org/courses/bitcoin-for-developers/3/hd-wallets)
- **Database**
- **Task-Scheduler**

## Blockchain

The blockchain can be compared to an Excel sheet on a network of computers. When a row is modified or created, the change is reflected on all computers connected to the network. Blockchain is a distributed ledger with a growing list of records that are securely linked together via cryptographic hashes ([Wikipedia](https://en.wikipedia.org/wiki/Blockchain)).

### Types of Blockchain

There are different types of blockchain:

- **Public Blockchains**: Open to anyone and completely decentralized (e.g., Bitcoin, Ethereum).
- **Private Blockchains**: Restricted to specific participants and usually managed by a single entity (e.g., Hyperledger).
- **Consortium Blockchains**: Semi-decentralized and governed by a group of organizations (e.g., R3 Corda).
- **Hybrid Blockchains**: Combine features of both public and private blockchains to provide more flexibility (e.g., Dragonchain).
### Blockchain for Payments

Blockchain can be used for various purposes, including payments. For payments to happen on a blockchain, you must have a sender and a receiver. The process involves the following steps:

1. The sender signs a transaction to send a certain amount to the receiver.
2. The transaction is sent to a blockchain network.
3. One of the computers in the blockchain network picks up the transaction and checks its authenticity.
4. The transaction is then passed to other computers (nodes) in the network.
5. Each computer updates its ledger or sheet.
6. Any node connected to the receiver's wallet notifies them of the money sent.

## Smart Contract and Stable Coin

Smart contracts are self-executing contracts with the terms of the agreement directly written into code. These contracts automatically execute actions when specific conditions are met. The actions could include releasing funds, registering assets, sending notifications, or issuing tickets. Once executed, the transaction is recorded on the blockchain, ensuring immutability and transparency, with visibility limited to authorized parties.

### Example of a Simple Smart Contract

Below is a simple smart contract that stores and retrieves data on the blockchain:

```solidity
// SPDX-License-Identifier: GPL-3.0
pragma solidity >=0.4.16 <0.9.0;

contract SimpleStorage {
    uint storedData;

    function set(uint x) public {
        storedData = x;
    }

    function get() public view returns (uint) {
        return storedData;
    }
}
```

When this contract is compiled, there are two items of interest:

1. **ABI (Application Binary Interface)**: The ABI is a JSON string that describes the makeup of the contract – the functions as well as the parameter types of each function. The ABI looks like the following:

```json
[
  {
    "inputs": [],
    "name": "get",
    "outputs": [
      { "internalType": "uint256", "name": "", "type": "uint256" }
    ],
    "stateMutability": "view",
    "type": "function"
  },
  {
    "inputs": [
      { "internalType": "uint256", "name": "x", "type": "uint256" }
    ],
    "name": "set",
    "outputs": [],
    "stateMutability": "nonpayable",
    "type": "function"
  }
]
```

2. **Bytecode**: When a contract is compiled, it is translated into opcodes (low-level machine instructions) and their hexadecimal equivalents. The bytecode is a collection of these hexadecimal representations.

```
608060405234801561000f575f80fd5b5060043610610034575f3560e01c806360fe47b1146100385780636d4ce63c14610054575b5f80fd5b610052600480360381019061004d91906100ba565b610072565b005b61005c61007b565b60405161006991906100f4565b60405180910390f35b805f8190555050565b5f8054905090565b5f80fd5b5f819050919050565b61009981610087565b81146100a3575f80fd5b50565b5f813590506100b481610090565b92915050565b5f602082840312156100cf576100ce610083565b5b5f6100dc848285016100a6565b91505092915050565b6100ee81610087565b82525050565b5f6020820190506101075f8301846100e5565b9291505056fea26469706673582212201fb862b86b277f641e1d608fcae310c4290ce834fd4b00faa33f1b0b6a4c911664736f6c634300081a0033
```

### Stablecoins

Stablecoins are a type of cryptocurrency that seeks to maintain a stable value by pegging their market value to an external reference. This reference could be a fiat currency like the U.S. dollar, a commodity such as gold, or another financial instrument. Examples of stablecoins include USDT, DAI, and USDC, which are pegged against the US Dollar.

One of the most common ways to create a stablecoin is through the use of smart contracts using the ERC-20 standard, which stands for Ethereum Request for Comments 20. This technical standard is used for smart contracts on the Ethereum blockchain. Many stablecoins, such as USDT, DAI, and USDC, are built using the ERC-20 standard, making it easier for them to be traded on decentralized exchanges and interact with other tokens and smart contracts on the Ethereum network.
This technical standard is used for smart contracts on the Ethereum blockchain. Many stablecoins, such as USDT, DAI, and USDC, are built using the ERC-20 standard, making it easier for them to be traded on decentralized exchanges and interact with other tokens and smart contracts on the Ethereum network.

## Architecture of Stablecoin Blockchain Payment System

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jpslpa5fpb53w8pgku1k.png)

The basic way Cryptpay works is:

1. **Account Creation**: When Mr. Paul creates an account, a public key and an API key are generated.
   - The **public key** can be used by anonymous users to generate an address to pay Mr. Paul.
   - The **API key** can be used by Mr. Paul personally to generate an address.
2. **Generating an Address**: To perform a request to generate an address, you need:
   - An email address
   - A name
   - The amount you want to send to Mr. Paul
   - The type of stablecoin: USDT, DAI, USDC
3. **Checking Transactions**:
   - You can copy the address or reference ID and pass it to the check transaction endpoint to verify if stablecoins have been deposited.
   - Authenticated users, like Mr. Paul, can check their total account balance by calling the balance endpoint.

### Technical Details

Cryptpay is built with Django, a Python web framework, specifically using the Django REST Framework (DRF) to build web APIs.

#### Worker Services

The worker services handle asynchronous tasks such as:

- Sending OTPs to new users to verify their email.
- Monitoring generated addresses every minute to check if they have been funded with stablecoins, updating the database with the deposited amount, and deleting the schedule created for the address.
- Sending email notifications to both the sender and receiver when stablecoins are deposited or transferred.
- Transferring stablecoins to the respective user address.

These tasks are scheduled and distributed using [**Django Q**](https://django-q2.readthedocs.io/) (a Django task scheduler), which coordinates tasks among worker nodes.

The system uses a Redis database and connects to blockchain nodes to perform the following functions:

- Sending BNB to generated addresses whenever stablecoins are deposited in those addresses.
- Transferring stablecoins from generated addresses to Cryptpay's vault address.
- Sending stablecoins to respective customer addresses when they request withdrawals.

#### External Blockchain Data API

Cryptpay uses an external blockchain data API (the [**Covalent API**](https://www.covalenthq.com)) to query generated addresses and check if any transactions have occurred. This API helps in tracking deposits and managing fund transfers securely and efficiently.

By leveraging Django, Django Q, Redis, and blockchain nodes, Cryptpay provides a robust and scalable solution for managing stablecoin transactions. The architecture ensures secure, real-time processing of payments and notifications, making it a reliable choice for blockchain-based payment systems.

### What Happens When You Sign Up?

When you create an account by sending request data to the signup API endpoint:

```json
{
  "name": "John Doe",
  "email": "[email protected]",
  "password": "********"
}
```

1. **Email Schedule Creation**: An email schedule is created to send a One-Time Password (OTP) to verify the account. The OTP code is generated using Python's `random` module.
```python
# schedule.py
funcs = "account.emailtoken.send_token"
arg_mail = instance.email
arg_name = instance.name
print(arg_name, arg_mail)

save_email = Schedule.objects.create(
    name=instance.email,
    func=funcs,
    args=f"'{arg_mail}', '{arg_name}'",
    schedule_type=Schedule.ONCE,
)

# emailtoken.py
import random
import math

# The multiplier is partially redacted in the original source ("12344xxx");
# any sufficiently large constant works for producing a random value.
rand_value = math.floor(random.random() * 12344000)
otp = int(str(rand_value)[0:5])  # convert to a string and slice the first five digits as the OTP

def send_token(email, name):
    pass
```

2. **Balance Initialization**: Rows are bulk created to store each stablecoin balance per user, with the default value set to 0.

```markdown
Table_name = UserToken

| user_id | coin_name | balance | time         |
|---------|-----------|---------|--------------|
| 1       | USDT      | 34500   | 2024-06-30   |
| 1       | DAI       | 78000   | 2024-06-30   |
| 1       | BUSD      | 15000   | 2024-06-30   |
```

- **user_id**: The ID of the user (e.g., 1).
- **coin_name**: The type of stablecoin (e.g., USDT, DAI, BUSD).
- **balance**: The balance of the stablecoin for the user, multiplied by 100.
- **time**: The timestamp when the balance was recorded.

To update the balance, for example, in USDT, the value to be added or subtracted from the database value is multiplied by 100 to convert any decimal value to a whole number. This is similar to converting from dollars to cents.

```python
value_inputted = 20.5  # 20.5 dollars
balance = 34500  # equivalent to 345 dollars when divided by 100

converted_value = value_inputted * 100
updated_balance = balance + converted_value
```

This process ensures that new users are verified via email and that their initial stablecoin balances are set up correctly.

### Generating BSC/ETH Addresses: A Deep Dive into the Process

<p float="left">
  <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5nesgqhs9kmjr2ld1r5q.png" alt="inputting request data containing name, email, amount, and coin in Swagger UI to generate an address" />
  &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
  <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nf5wc63ab72yarb2zp3f.png" alt="JSON response data containing the generated address, etc." />
  &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
</p>

*API endpoint for generating an ETH or BSC address to receive stablecoins.*

When building decentralized applications (dApps) or blockchain-related services, generating unique cryptocurrency addresses is a fundamental operation. In this subtopic, we'll explore how to generate BSC/ETH addresses, focusing on the interaction with the "generate address" API endpoint and the associated database operations. We'll also delve into the address generation logic and how it's securely implemented.

#### The API Endpoint and Database Interaction

When a request is made to the "generate address" API endpoint, it triggers a series of operations to generate a new BSC/ETH address. The system maintains a table named `AddrCounter` in the database, which keeps track of the total number of addresses generated.

Here's a breakdown of the process:

1. **API Request Handling**: The endpoint receives a request to generate a new address.
2. **Database Update**: The `AddrCounter` table is queried and updated to increment the `total_count` of generated addresses.
3. **Address Generation**: The updated `total_count` is passed to the `generate_eth_address` function, which generates a new address.

#### Code Snippet for Address Generation

Below is a simplified version of how the address generation function might be implemented. Note that we're not storing the private key in the database for security reasons.
```python
import os
# Placeholder import from the original article; these class names match
# those provided by the open-source `hdwallet` Python package.
from some_blockchain_library import BIP44HDWallet, BIP44Derivation, EthereumMainnet

def generate_eth_address(total_count):
    # Load mnemonic words from environment variables
    mnemonic = os.getenv("MNEMONIC_WORDS")

    # Initialize the HD wallet for Ethereum
    hd_wallet = BIP44HDWallet(cryptocurrency=EthereumMainnet)
    hd_wallet.from_mnemonic(mnemonic=mnemonic, language="english")
    hd_wallet.clean_derivation()

    # Derive the address using the total count as part of the path
    derivation_path = BIP44Derivation(
        cryptocurrency=EthereumMainnet, account=1, change=False, address=total_count
    )
    hd_wallet.from_path(path=derivation_path)

    # Return the generated address and private key
    return hd_wallet.address(), hd_wallet.private_key()

# Example usage
new_address = generate_eth_address(total_count=1234)[0]  # only need the address for now
print(f"Generated Ethereum Address: {new_address}")
```

#### Flowchart of the Address Generation Process

To make the process more understandable, here's a flowchart illustrating the steps:

1. **API Request**: A request is made to the "generate address" endpoint.
2. **Retrieve and Update Counter**: The system retrieves the current `total_count` from the `AddrCounter` table, increments it, and updates the table.
3. **Generate Address**: The updated `total_count` is passed to the `generate_eth_address` function.
4. **Return Address**: The function generates a new address and returns it to the requester.

![flowchart showing how to generate address](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9h085avpuus89bu9aagf.png)

#### Summary of the Address Generation Code

The `generate_eth_address` function leverages a hierarchical deterministic (HD) wallet based on the BIP44 standard, specifically configured for the Ethereum Mainnet. Here's a summary of what the code does:

1. **Load Mnemonic Words**: Retrieves the mnemonic seed words from environment variables.
2. **Initialize HD Wallet**: Sets up the HD wallet using the mnemonic words.
3. **Clean Derivation Path**: Resets the wallet to its initial state.
4. **Derive Address**: Uses the total count as an address index in the derivation path to generate a new address.
5. **Return Address**: Returns the newly generated address (and private key, though the private key is ignored in practice for security).

By following this structured approach, developers can securely generate and manage BSC/ETH addresses, ensuring that sensitive information like private keys is not unnecessarily exposed or stored.

### Checking for Payments to the Generated Address

<p float="left">
  <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6micnx6untzwzyx2kxo6.png" alt="web API request by inputting an ETH address to check if stablecoin was sent to the address" />
  &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
  <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fae35q7symn21qzxwc48.png" alt="web API JSON response showing that stablecoin was deposited in the address" />
  &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
</p>

*API endpoint to check if stablecoin has been successfully deposited.*

Once an address is generated for a user and sent as a response, the address and the stablecoin the user wants to use for payment are passed to a worker service as arguments. This is achieved using Django Q scheduling by passing the arguments to a function called `check_address_balance`, which starts running. The address is checked for stablecoin deposits every minute for the next 60 minutes.
#### Scheduling Address Balance Checks

```python
Schedule.objects.get_or_create(
    name=instance.reference,
    func="gen_acc.task.task_one.check_address_balance",
    args=f"'{instance.bsc_address}', '{instance.coin}'",
    schedule_type=Schedule.MINUTES,
    minutes=1,
    repeats=60,
)
```

#### Checking Address Balance

The `check_address_balance` function calls another function, `transact_info`, which uses the same arguments passed to it and makes a request to the external blockchain API to get the transaction data.

#### Code for Checking Transactions

```python
# task.py
def transact_info(coin_name, address):
    # Request blockchain API to check for transactions
    # This part of the code is omitted for brevity
    # (variables such as payment_made, res, block, etc. come from the omitted request logic)
    if payment_made:
        return {
            "status": 0,  # Indicates transaction occurred
            "status_code": res.status_code,  # Response status code
            "block": block,  # Transaction block number
            "tx_hash": tx_hash,  # Transaction hash
            "from_address": from_address,  # Address that sent the stablecoin
            "balance": balance,  # Amount sent
            "success": success  # Transaction successful
        }
    else:
        return {
            "status": 99,
            "status_code": 500
        }

def check_address_balance(coin, address):
    info = transact_info(coin, address)
    # Further processing based on the transaction info
```

If the transaction is successful, the `balance` from the response data will be used to update the user's coin balance in the `UserToken` table. Other data will fill the empty fields in the `GenerateAddress` table, and the schedule created to check the address will be deleted to prevent duplicated data.

#### Sample Table Representation

```markdown
| id | user_id | bsc_address | customer_name | customer_email | coin | actual_amount | reference | idd | network | currency | success | amount_paid | hashes | sender_address | block_number | date_added |
|----|---------|-------------|---------------|----------------|------|---------------|-----------|-----|---------|----------|---------|-------------|--------|----------------|--------------|------------|
| 1  | 1 | 0x123...abc | John Doe    | [email protected] | USDT | 2050 | ref123 | 1 | testnet | USD | True  | 2050 | 0xabc | 0x456...def | 123456 | 2024-06-30 12:34:56 |
| 2  | 1 | 0x456...def | Jane Smith  | [email protected] | DAI  | 5000 | ref456 | 2 | testnet | USD | False | 0    |       |             |        | 2024-06-30 12:45:00 |
| 3  | 1 | 0x789...ghi | Bob Johnson | [email protected] | BUSD | 3000 | ref789 | 3 | testnet | USD | True  | 3000 | 0xghi | 0x123...abc | 123457 | 2024-06-30 13:00:00 |
```

#### Internal Transaction

Another asynchronous task using Django Q is created to move stablecoins from the generated address to the CryptPay address. To do this, a small amount of BNB or ETH is first sent to the generated address to cover the gas fee for sending the stablecoin.
##### Sending BNB for Gas Fee

```python
from web3 import Web3
import os
from dotenv import load_dotenv

load_dotenv()

bsc = os.getenv("NODE")  # Blockchain RPC node
w3 = Web3(Web3.HTTPProvider(bsc))

account_1 = os.getenv("CRYPTPAY_ADDRESS")

gas_price = w3.eth.gas_price
amount_to_send = gas_price * 90000  # Amount of BNB (already in wei) to cover future gas

def send_bnb(generated_address):
    nonce = w3.eth.get_transaction_count(account_1)  # fetch a fresh nonce for each transaction
    tx = {
        'chainId': 97,
        'nonce': nonce,
        'to': generated_address,
        'value': amount_to_send,  # already in wei; converting again with to_wei would inflate the amount
        'gas': 31000,
        'gasPrice': w3.eth.gas_price
    }
    signed_tx = w3.eth.account.sign_transaction(tx, os.getenv("Private_key"))  # Sign the transaction using the private key
    tx_hash = w3.eth.send_raw_transaction(signed_tx.rawTransaction)
    return w3.to_hex(tx_hash)
```

The transaction hash returned by the `send_bnb` function can be used to check if the transaction was successful. If successful, the `transfer_erc20` function is called to transfer the stablecoin from the generated address to the CryptPay address.

##### Transferring StableCoin

The `transfer_erc20` function is designed to transfer ERC-20 tokens from one address to another on the Ethereum blockchain (or compatible blockchains like Binance Smart Chain). Here is a detailed explanation of the function:

```python
import json

with open("gen_acc/utilis/erc20.json") as abi_file:
    data = json.load(abi_file)

abi = data['abi']

# Accepted coin contract addresses on the Binance Smart Chain testnet
accepted_coin = {
    "USDT": "0x337610d27c682E347C9cD60BD4b3b107C9d34dDd",
    "BUSD": "0xeD24FC36d5Ee211Ea25A80239Fb8C4Cfd80f12Ee",
    "DAI": "0xEC5dCb5Dbf4B114C9d0F65BcCAb49EC54F6A0867"
}

def transfer_erc20(receiver_address, sender_address, coin, amount, priv_key):
    run_tx = False
    contract_addr = accepted_coin[coin]  # Get the coin contract address
    coin_contract = w3.eth.contract(address=contract_addr, abi=abi)
    try:
        # Check BNB balance to ensure enough for the transaction fee
        balance = w3.eth.get_balance(sender_address)
        balance_in_ether = w3.from_wei(balance, 'ether')

        # Calculate gas fee
        gas_e = 78888  # Gas limit 1
        gas_f = 54154  # Gas limit 2
        tx_fee = w3.eth.gas_price  # Current gas price
        total_txFee1 = tx_fee * gas_e
        total_txFee2 = tx_fee * gas_f
        fee_in_ether_one = w3.from_wei(total_txFee1, 'ether')
        fee_in_ether_two = w3.from_wei(total_txFee2, 'ether')

        # Check if the balance is greater than the transaction fee
        if balance_in_ether > fee_in_ether_one:
            gas_limit = gas_e
            run_tx = True
        elif balance_in_ether > fee_in_ether_two:
            gas_limit = gas_f
            run_tx = True
        else:
            print("Transaction fee is high right now")

        # ERC-20 transfer
        if run_tx:
            nonce = w3.eth.get_transaction_count(sender_address)
            tx = {
                'nonce': nonce,
                'gas': gas_limit,
                'gasPrice': w3.eth.gas_price
            }
            amount = w3.to_wei(amount, 'ether')
            trans = coin_contract.functions.transfer(receiver_address, amount).build_transaction(tx)
            signed_tx = w3.eth.account.sign_transaction(trans, priv_key)
            tx_hash = w3.eth.send_raw_transaction(signed_tx.rawTransaction)
            return w3.to_hex(tx_hash)
    except Exception as err:
        # Avoid a bare except so real errors stay visible
        print(f"There was an error: {err}")
```

##### Explanation

The `transfer_erc20` function handles the transfer of ERC-20 tokens from a sender's address to a receiver's address. Here's a step-by-step explanation of what the function does:

1. **Initialize and Load ABI**: It loads the ABI (Application Binary Interface) for the ERC-20 token contract from a JSON file.
2. **Set Contract Address**: It retrieves the contract address of the specified token (e.g., USDT, BUSD, DAI) from the `accepted_coin` dictionary.
3. **Create Contract Instance**: It creates an instance of the contract using Web3's `eth.contract` method.
4. **Check Balance**: It checks the balance of the sender's address to ensure there is enough Ether to cover the transaction fees.
5. **Calculate Gas Fees**: It calculates the gas fees required for the transaction based on the current gas price and predefined gas limits.
6. **Determine if Transaction is Feasible**: It checks if the sender's balance is sufficient to cover the calculated gas fees. If so, it sets `run_tx` to `True`.
7. **Build Transaction**: If the transaction is feasible, it builds the transaction using the contract's `transfer` function. It sets the transaction's `nonce`, `gas`, and `gasPrice`.
8. **Sign Transaction**: It signs the transaction with the sender's private key.
9. **Send Transaction**: It sends the signed transaction to the blockchain using `w3.eth.send_raw_transaction`.
10. **Return Transaction Hash**: It returns the transaction hash, which can be used to track the transaction status on the blockchain.

This function is crucial for moving stablecoins from a user's generated address to the main CryptPay address after the user has made a payment. It ensures that the transfer is secure and that all necessary fees are covered.

This setup ensures that transactions are processed efficiently and securely, leveraging the capabilities of Django Q for task scheduling and Web3 for blockchain interactions. The `transfer_erc20` function is also called asynchronously using Django Q whenever users call the transfer API endpoint to withdraw stablecoins. This ensures that the transaction is processed without blocking the main application flow, providing a smooth and efficient user experience.

### API Documentation

I have created comprehensive API documentation. Below is a GIF showcasing the API documentation using Swagger:

![API Documentation GIF](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ado4ay03icmo8xs2005h.gif)

For detailed API references and to start using Cryptpay, visit our [API Documentation](https://noble-timmi-simeon2001-997db307.koyeb.app/api/document). Copy your generated address and fund it [here](https://www.bnbchain.org/en/testnet-faucet) with stablecoin to try out the Cryptpay API.

## Conclusion

Building Cryptpay has been an enlightening journey, inspired by the innovative work of Emmanuel Njoku's Lazerpay. Despite Lazerpay's unfortunate shutdown, its vision continues to inspire new projects like Cryptpay. The development of Cryptpay, built on the Binance Smart Chain (BSC) testnet, demonstrates the immense potential of blockchain technology in transforming global payments.

Blockchain technology offers a fast, secure, and cost-effective way to handle transactions, eliminating the need for middlemen and reducing transaction fees. Financial institutions are recognizing the substantial savings and efficiency gains that distributed ledger technology can provide. Cryptpay aims to leverage these advantages, making it easy for businesses and individuals to integrate stablecoin payments into their applications, whether for e-commerce, donations, or other use cases.

By supporting stablecoins like USDT, DAI, and BUSD, Cryptpay offers a stable and reliable means of transaction in the volatile world of cryptocurrency. This blog post has walked through the technical aspects of building a blockchain-powered payment infrastructure, from generating and monitoring payment addresses to handling transactions securely.
The detailed explanations and code snippets provided should help you understand the inner workings of such a system and inspire you to create your own blockchain-powered applications.

In conclusion, the future of payments is undoubtedly heading towards blockchain, and projects like Cryptpay are at the forefront of this revolution. By simplifying the integration of crypto payments, Cryptpay aims to bridge the gap between traditional finance and the burgeoning world of digital currencies, paving the way for a more connected and efficient global economy.

If you have any questions, feel free to leave them in the comment section.
simeon2001
1,913,122
How To Sort Array of Strings
I was building a pdf merger that takes two pdf files, merge them, then return a single file - result...
0
2024-07-05T19:07:49
https://dev.to/codarbind/how-to-sort-array-of-strings-57m3
javascript, webdev
I was building a [pdf merger](https://github.com/codarbind/pdf-merger) that takes two pdf files, merges them, then returns a single file - the result of the merging.

There is a requirement that the files must be in order, i.e. a particular file has to be at the top.

I exposed this via an endpoint, and I am using multer to manage file uploads as shown below:

```
upload.fields([{ name: 'pdf1', maxCount: 1 }, { name: 'pdf2', maxCount: 1 }])
```

Multer would not order the files, and there is no guarantee on the order of what I would get from Multer, so I had to resort to sorting the array of 'processed files' from multer. I sorted by 'filename':

```
files.sort((a, b) => a.filename - b.filename);
```

To my greatest shock, the array was not sorted. I hurriedly went to my terminal, launched the REPL, and tried it; there, it got sorted.

![repl sorted the array of strings](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r96y2afjpsilfxh4wdmv.png)

😲 How? Why?

Then I went back to the basics of sorting: comparing and returning -1, 0, or 1 if the first argument is less than, equal to, or greater than the second argument, respectively. Like this -

![sorting array of strings with guarantee](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9wfisw4k7s2i6bwxh8dd.png)

Have you encountered a similar issue before? Or an inconsistency on REPL? I would love to hear from you.
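P.S. Since the fix above lives in a screenshot, here is a text version of the same idea (a minimal sketch, assuming `filename` holds strings as in the multer file objects above). Subtracting two strings yields `NaN`, so the broken comparator gives the engine no usable ordering information; `localeCompare` returns the negative/zero/positive number a comparator must produce:

```typescript
// Broken: "pdf1" - "pdf2" is NaN, so sort() cannot order the items.
// files.sort((a, b) => a.filename - b.filename);

// Working: compare the strings explicitly.
const files = [{ filename: "pdf2" }, { filename: "pdf1" }];
files.sort((a, b) => a.filename.localeCompare(b.filename));

console.log(files.map((f) => f.filename)); // [ 'pdf1', 'pdf2' ]
```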
codarbind
1,913,115
What is Open Source Success?
In this video, we look at some of the open source metrics you might use to determine whether or not a project is successful.
0
2024-07-05T19:05:41
https://dev.to/opensauced/what-is-open-source-success-382n
opensource
---
title: What is Open Source Success?
published: true
description: In this video, we look at some of the open source metrics you might use to determine whether or not a project is successful.
tags: opensource
---

## Mentioned Resources

- [State of JS 2023 - metaframeworks](https://app.opensauced.pizza/workspaces/69c010b5-222b-4028-b61f-842484aac012)
- [Gatsby repo page](https://app.opensauced.pizza/s/gatsbyjs/gatsby)
- [Next.js repo page](https://app.opensauced.pizza/s/vercel/next.js)
- [State of JS Report](https://2023.stateofjs.com/en-US/libraries/meta-frameworks/)
- [Jason Huggins X Post](https://x.com/hugs/status/1804285915516338506)
bekahhw
1,913,120
Weekly Updates - July 5, 2024
Hi everyone 👋 It’s July meaning we’ve entered the second half of the year. At Couchbase we don’t do...
0
2024-07-05T19:04:05
https://dev.to/couchbase/weekly-updates-july-5-2024-2obj
couchbase, community, cpp, rails
Hi everyone :wave: It’s July, meaning we’ve entered the second half of the year. At Couchbase we don’t do things by half measure, so here are some of the latest happenings.

- 👏 **Announcing General Availability of the C++ SDK for Couchbase** - We are thrilled to announce the General Availability (GA) of the C++ SDK, which adds support for the native C++ language to our existing comprehensive set of SDK libraries in 11 programming languages and marks a significant milestone in our commitment to providing robust, high-performance tools for developers to build modern, scalable applications. [*Learn more here >>*](https://www.couchbase.com/blog/announcing-general-availability-of-the-cpp-sdk-for-couchbase/)
<br>
- :book: **Couchbase on Rails: A Guide to Introducing Dynamic and Adaptive Data to Your Application** - Our Senior Dev Evangelist Ben Greenberg provides insights as well as sample code to integrate Couchbase into your Ruby on Rails application. [*Check out the guide >>*](https://www.couchbase.com/blog/couchbase-rails-guide-adaptive-data/)
<br>
- 💻 **New Blog Post: Accelerate Couchbase-Powered RAG AI Application With NVIDIA NIM/NeMo and LangChain** - This post introduces the concept of an interactive chatbot based on a Retrieval Augmented Generation (RAG) architecture with Couchbase Capella as a Vector database, as we announce our new integration with NVIDIA NIM/NeMo. [*Read more >>*](https://www.couchbase.com/blog/accelerate-rag-ai-couchbase-nvidia/)
<br>
- :mortar_board: **ICYMI: Benchmarking Databases with YCSB** - Now on demand, our recent webcast covers everything you need to know to benchmark databases effectively on your own using the Yahoo! Cloud Serving Benchmark (YCSB). [*Watch now >>*](https://info.couchbase.com/benchmarking-databases-with-ycsb-2024june.html?utm_campaign=adaptive-apps&utm_medium=community&utm_source=discord&utm_content=webinar&utm_term=developer)

See you next week!
brianking
1,913,119
Uncovering Infidelity: How I Found My Wife Cheating on Her Phone
In today's digital age, where social media and dating apps have become integral to many people's...
0
2024-07-05T19:01:59
https://dev.to/colten_parker_ed80d13923f/uncovering-infidelity-how-i-found-my-wife-cheating-on-her-phone-5df9
lostcrypto, hireahackeronline, anonymoushelp
In today's digital age, where social media and dating apps have become integral to many people's daily routines, the dynamics of relationships and the ways inferences are drawn about fidelity have evolved significantly. The discovery of a partner's infidelity can be a devastating blow to any individual, challenging the very foundation of trust and commitment that a relationship is built upon. The phrase "wife caught cheating" is not only a search term that brings to light numerous stories of heartbreak and betrayal but also underscores an increasing trend of affairs being facilitated and unearthed through digital means. From text messages to secretive social media interactions, the evidence of cheating is often hidden in plain sight, on the very devices that facilitate our closest connections. The transition from mere suspicion to considering the possibility of infidelity involves piecing together these scattered, yet related, behaviors. Observing these changes prompts a more focused scrutiny of phone records, social media interactions, and other digital footprints that could provide concrete evidence of cheating. This stage is critical as it sets the foundation for either confirming or alleviating the suspicions of infidelity. In most cases, it is difficult to get access to cheaters phones because they are cautious and implement measures like changing their passwords or being overprotective of their phones. For that, I opted to look for a way to hack her and monitor it from my end. I got hold of a phone hacker known as CyberPunk Programmers. They gave me a software which is integrated with her phone and within a day, I got all the evidence I needed, from a range of texts, photos, and phone calls. The process of getting the software was pretty easy. I only had to email them through cyberpunk (@) programmer (.) net. Communicate with them to clear your conscience.
colten_parker_ed80d13923f
1,912,107
Using the @Lookup Annotation in Spring
The @Lookup annotation is an injection (like @Inject, @Resource, @Autowired) annotation used at the...
27,602
2024-07-05T19:00:00
https://springmasteryhub.com/2024/07/05/using-the-lookup-annotation-in-spring/
java, programming, spring, springboot
The `@Lookup` annotation is an injection annotation (like `@Inject`, `@Resource`, and `@Autowired`) used at the method level. This annotation tells Spring to override the method, redirecting the call to the bean factory to return a bean matching the method's return type.

This is useful for certain bean scopes, such as the prototype scope, which returns a new instance of the bean each time it is requested. By combining it with the `@Lookup` annotation, you can make things more dynamic.

## Use Cases for `@Lookup`

Imagine that you have a bean that handles user-specific session data. Each request should have its own instance to avoid data conflicts.

### Code example:

Let's say you have a prototype bean that records the exact moment a user of your system made a request.

```java
@Component
@Scope("prototype")
public class UserSessionBean {

    private String userData;

    public UserSessionBean() {
        this.userData = "Session data for " + System.currentTimeMillis();
    }

    public String getUserData() {
        return userData;
    }
}
```

Now you want Spring to get you a new bean to process a request and use this new bean's information to do some processing. In this example, it will look like a session object that can hold user information during the request context.

```java
@Component
public class UserService {

    @Lookup
    public UserSessionBean getUserSessionBean() {
        // Spring will override this method to return a new instance of UserSessionBean
        return null;
    }

    public String processUserSession() {
        UserSessionBean userSessionBean = getUserSessionBean();
        return "Processing user session with data: " + userSessionBean.getUserData();
    }
}

@RestController
@RequestMapping("/users")
public class UserController {

    private final UserService userService;

    public UserController(UserService userService) {
        this.userService = userService;
    }

    @GetMapping("/session")
    public String getUserSession() {
        return userService.processUserSession();
    }
}
```

## Conclusion

By using the `@Lookup` annotation, you can make things more dynamic. In this example, it allows you to handle user-specific sessions, but you can be creative and create many more scenarios using this annotation.

If you like this topic, make sure to follow me. In the following days, I’ll be explaining more about Spring annotations! Stay tuned!
tiuwill
1,913,118
Comprehensive Guide to Skin Whitening Treatments in Gurgaon
We specialize at Irvin Cosmetics, Gurgaon in transforming your skin using advanced treatments...
0
2024-07-05T18:58:00
https://dev.to/irvinskingurgaon/comprehensive-guide-to-skin-whitening-treatments-in-gurgaon-370b
skin, treatment, gurgaon, webdev
At Irvin Cosmetics, Gurgaon, we specialize in transforming your skin with advanced treatments personalized for you. **[Skin Treatment in Gurgaon](https://www.irvincosmetics.com/skin-treatment-in-gurgaon)**

Incredible results are possible when our experts, who are doctors and skin care specialists, combine the latest technologies with established methods. Whether you want to rejuvenate your skin, fight acne, reduce signs of aging, or simply brighten your complexion, we offer laser treatments, chemical peels, and more.
irvinskingurgaon
1,913,117
2D and 3D graph Visualisation from JSON
🌐Transform JSON into dynamic 2D &amp; 3D graphs instantly! Explore 3D graph with pan, orbit, and...
0
2024-07-05T18:54:04
https://dev.to/bugblitz98/2d-and-3d-graph-visualisation-from-json-494p
webdev, javascript, programming, beginners
🌐 Transform JSON into dynamic 2D & 3D graphs instantly! Explore the 3D graph with pan, orbit, and rotate features. Choose from force-directed, radial out, or tree layouts. Full-screen option available! Try now at [json2graph.com](https://json2graph.com) 📊 #DataViz #JSON #Graphs #developers #visualization

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w4cu83xv10jpsst23pud.png)
bugblitz98
1,913,116
Google I/O Flutter Updates and Firebase
Catch the latest updates on Flutter and Firebase from Google I/O 2024. Hear Roman Jacquez's insights and explore the exciting new features.
25,852
2024-07-05T18:53:43
https://codingcat.dev/podcast/google-i-o-flutter-updates-and-firebase
webdev, javascript, beginners, podcast
Original: https://codingcat.dev/podcast/google-i-o-flutter-updates-and-firebase

{% youtube https://youtu.be/u562QZk-OFk %}

## Introduction and Guest Background

* _Guest Introduction:_ Roman Jacquez introduces himself, describing his background. Originally from the Dominican Republic, Roman shares his experience of moving to New York, studying computer science, and his initial interest in civil engineering before transitioning to software engineering.

## Career Journey

* _Early Career:_ Roman talks about his early days working at Lion Bridge as a desktop publisher and his passion for software engineering that led him to create internal tools.
* _Philips Health System:_ Roman discusses his role at Philips Health System where he pushed for the adoption of Flutter within his team, leading to significant projects supported by the Bill and Melinda Gates Foundation.

## Flutter and Google's Developer Ecosystem

* _Google I/O and Flutter:_ Discussion revolves around the Google I/O 2024 event and key Flutter announcements. Roman highlights improvements to Flutter’s Impeller rendering engine and the introduction of WASM (WebAssembly) support.
* _Dart Language and Multi-Platform:_ Roman and Alex discuss advancements in the Dart language including macros. They also talk about Google's strategy in offering multiple development options such as Flutter and Kotlin Multi-Platform.

## Firebase and Flutter Integration

* _Firebase Data Connect:_ Roman shares his insights on Firebase Data Connect and his excitement over the integration of PostgreSQL and GraphQL, providing more versatility compared to the traditional NoSQL Firebase approach.
* _Firebase Tools and Features:_ Emphasizes the range of Firebase tools like Remote Config, Firebase Authentication, and Cloud Messaging, and how these tools can be effectively integrated with Flutter to build comprehensive applications.

## AI and Machine Learning Integration

* _Gemini and Google AI:_ Roman discusses his experiments with Google's generative AI package and Vertex AI, explaining how these make implementing advanced AI functionalities accessible even for those with limited AI expertise.
* _Multi-Modal Capabilities:_ Explains the potential of combining various media formats (text, video, images) using Gemini’s multi-modal capabilities, which allows for sophisticated applications closer to real-time utility and richer user experiences.

## Community Involvement and Future Outlook

* _Developer Community:_ Roman is actively involved in GDG (Google Developer Group) Lawrence, highlighting the importance of community in learning and sharing new technology advancements.
* _Future Projects:_ Mentions upcoming initiatives and projects involving Flutter, Firebase, and AI. These include workshops and events aimed at fostering community and learning.

## Entertainment and Personal Interests

* _Personal Picks by Roman:_ Roman shares his recommendation for the TV series "Beef," emphasizing its gripping narrative.
* _Technology Picks by Alex:_ Alex discusses Next.js updates, highlighting features in the upcoming version 15 and praising its potential benefits for web developers.

This structured summary highlights key themes and subtopics discussed, providing an overview that's easy to skim through.
codercatdev
1,913,114
What Makes ChatGPT So Smart? Unveiling the Secrets Behind Its Intelligence
Understanding What Makes ChatGPT So Intelligent Ever wondered what makes ChatGPT appear so...
0
2024-07-05T18:48:11
https://dev.to/davitacols/what-makes-chatgpt-so-smart-unveiling-the-secrets-behind-its-intelligence-2egb
# Understanding What Makes ChatGPT So Intelligent Ever wondered what makes ChatGPT appear so intelligent and capable of holding meaningful conversations? The secret lies in a combination of advanced machine learning techniques, vast training data, and powerful computational resources. In this post, we'll explore the key elements that contribute to ChatGPT's impressive capabilities. ## Large-Scale Training Data ### Diverse Datasets ChatGPT is trained on a wide array of internet text, including books, articles, websites, and more. This exposure to diverse topics and writing styles enables the model to generate coherent and contextually relevant responses across various subjects. ### Volume The sheer volume of data used in training allows the model to learn intricate patterns in language and knowledge, making its responses more accurate and nuanced. ## Advanced Neural Network Architecture ### Transformers ChatGPT is built on the Transformer architecture, which excels at understanding and generating human language. Transformers use self-attention mechanisms to weigh the importance of different words in a sentence, helping the model grasp context and relationships within the text. ### Deep Learning The model consists of multiple layers, or "deep" learning, which allows it to understand complex representations and subtleties in language, contributing to its sophisticated responses. ## Pretraining and Fine-Tuning ### Pretraining Initially, ChatGPT undergoes unsupervised training on a large corpus of text. During this phase, it learns grammar, facts about the world, and basic reasoning abilities, forming the foundation of its knowledge. ### Fine-Tuning After pretraining, the model is fine-tuned with supervised learning on a narrower dataset, with human-reviewed examples guiding the refinement process. This step ensures its responses are more accurate and appropriate. ## Scalability ### Model Size ChatGPT has billions of parameters, or weights, that it learns during training. This extensive network of parameters enables it to capture vast amounts of information and generate high-quality text. ### Computational Power The training process leverages significant computational resources, often using GPUs and TPUs to handle complex calculations efficiently. This computational power is crucial for processing the large-scale data and training the deep learning model. ## Reinforcement Learning from Human Feedback (RLHF) ### Feedback Loops Post-deployment, ChatGPT is further refined using feedback from human users. User interactions are collected, and the model's responses are improved based on this data. ### Ranking and Reward Human evaluators rate different model outputs, and these ratings train the model to produce more preferred responses. This reinforcement learning approach helps the model align better with human expectations. ## Continuous Improvement ### Updates ChatGPT undergoes periodic updates with new data and techniques, keeping it current with the latest information and improving its performance over time. ### Research and Development Ongoing research in AI and machine learning contributes to incremental improvements in model architecture, training techniques, and application methods, enhancing ChatGPT's capabilities. ## Conclusion ChatGPT's intelligence is the result of a sophisticated blend of advanced machine learning algorithms, extensive training data, powerful computational infrastructure, and continuous refinement through human feedback. 
These elements combine to create a model capable of understanding and generating human-like text with impressive coherence and relevance. Understanding these underlying mechanisms gives us a deeper appreciation of the technology that powers ChatGPT and its potential to transform the way we interact with machines.
davitacols
1,913,109
Recapping the AI, Machine Learning and Computer Meetup — July 3, 2024
We just wrapped up the July ‘24 AI, Machine Learning and Computer Vision Meetup, and if you missed it...
0
2024-07-05T18:46:45
https://voxel51.com/blog/recapping-the-ai-machine-learning-and-computer-meetup-july-3-2024/
computervision, datascience, machinelearning, ai
We just wrapped up the July ‘24 [AI, Machine Learning and Computer Vision Meetup](https://voxel51.com/computer-vision-ai-meetups/), and if you missed it or want to revisit it, here’s a recap! In this blog post you’ll find the playback recordings and highlights from the presentations as well as the upcoming Meetup schedule so that you can join us at a future event.

## Performance Optimisation for Multimodal LLMs

{% embed https://www.youtube.com/watch?v=49x8GVlqrOs %}

In this talk, we’ll delve into Multi-Modal LLMs, exploring the fusion of language and vision in cutting-edge models. We’ll highlight the challenges in handling diverse data heterogeneity, architecture design, strategies for efficient training, and optimization techniques to enhance both performance and inference speed. Through case studies and future outlooks, we’ll illustrate the importance of these optimizations in advancing applications across various domains.

**Speaker:** [Neha Sharma](https://www.linkedin.com/in/hashux/) has a rich background in digital products and technology services, having delivered successful projects for industry giants like IBM and launching innovative products for tech startups. As a Product Manager at Ori, Neha specializes in developing cutting-edge AI solutions by actively engaging with various AI-based use cases centered around the latest popular LLMs, demonstrating her commitment to staying at the forefront of AI technology.

## 5 Handy Ways to Use Embeddings, the Swiss Army Knife of AI

{% embed https://www.youtube.com/watch?v=IiujPchwuqo %}

Discover the incredible potential of vector search engines beyond RAG for large language models! Explore 5 handy embeddings applications: robust OCR document search, cross-modal retrieval, probing perceptual similarity, comparing model representations, concept interpolation, and a bonus—concept space traversal. Sharpen your data understanding and interaction with embeddings and open source FiftyOne.

**Speaker:** [Harpreet Sahota](https://www.linkedin.com/in/harpreetsahota204/) is a hacker-in-residence and machine learning engineer with a passion for deep learning and generative AI. He’s got a deep interest in RAG, Agents, and Multimodal AI.

## Deep Dive: Responsible and Unbiased GenAI for Computer Vision

{% embed https://www.youtube.com/watch?v=hEROpgx2bNE %}

In the rapidly evolving landscape of artificial intelligence, the emergence of Generative AI (GenAI) marks a transformative shift in the field. But can too fast of an adoption lead to some costly mistakes? In this talk, we’ll delve into the pivotal role GenAI will play in the future of computer vision use cases. Through an exploration of image datasets and the latest diffusion models, we will use FiftyOne – the open source data management tool for visual datasets – to demonstrate the leading ways GenAI is being adopted into computer vision workflows. We will also address concerns about how GenAI can potentially poison data, emphasizing the importance of vigilant data curation to ensure dependable and remarkable datasets.

**Speaker:** [Daniel Gural](https://www.linkedin.com/in/daniel-gural/) is a seasoned Machine Learning Evangelist with a strong passion for empowering Data Scientists and ML Engineers to unlock the full potential of their data. Currently serving as a valuable member of Voxel51, he takes a leading role in efforts to bridge the gap between practitioners and the necessary tools, enabling them to achieve exceptional outcomes.
Daniel’s extensive experience in teaching and developing within the ML field has fueled his commitment to democratizing high-quality AI workflows for a wider audience. ## Join the AI, Machine Learning and Data Science Meetup! The combined membership of the [Computer Vision and AI, Machine Learning and Data Science Meetups](https://voxel51.com/computer-vision-ai-meetups/) has grown to over 20,000 members! The goal of the Meetups is to bring together communities of data scientists, machine learning engineers, and open source enthusiasts who want to share and expand their knowledge of AI and complementary technologies.  Join one of the 12 Meetup locations closest to your timezone. - [Athens](https://www.meetup.com/athens-ai-machine-learning-data-science) - [Austin](https://www.meetup.com/austin-ai-machine-learning-data-science) - [Bangalore](https://www.meetup.com/bangalore-ai-machine-learning-data-science) - [Boston](https://www.meetup.com/boston-ai-machine-learning-data-science) - [Chicago](https://www.meetup.com/chicago-ai-machine-learning-data-science) - [London](https://www.meetup.com/london-ai-machine-learning-data-science) - [New York](https://www.meetup.com/new-york-ai-machine-learning-data-science) - [Peninsula](https://www.meetup.com/peninsula-ai-machine-learning-data-science) - [San Francisco](https://www.meetup.com/sf-ai-machine-learning-data-science) - [Seattle](https://www.meetup.com/seattle-ai-machine-learning-data-science) - [Silicon Valley](https://www.meetup.com/sv-ai-machine-learning-data-science) - [Toronto](https://www.meetup.com/toronto-ai-machine-learning-data-science) ## What’s Next? ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fdiymcs0zqy1e07e78hd.png) Up next on August 8th, 2024 at 10:00 AM PT and 1:00 PM ET, we have three great speakers lined up! **GenAI for Video: Diffusion-Based Editing and Generation** - [Ozgur Kara](https://www.linkedin.com/in/ozgurr-kara/), Georgia Institute of Technology / Adobe **Evaluating RAG Models for LLMs: Key Metrics and Frameworks** - [Abi Aryan](https://www.linkedin.com/in/goabiaryan/), founder of Abide AI **Why You Should Evaluate Your End-to-End LLM applications with In-House Data** - [Mahesh Deshwal](https://www.linkedin.com/in/deshwalmahesh/) - Senior Data Scientist, Chegg, Inc Register for the Zoom [here](https://voxel51.com/computer-vision-events/ai-machine-learning-computer-vision-meetup-aug-8-2024/). You can find a complete schedule of upcoming Meetups on the [Voxel51 Events page](https://voxel51.com/computer-vision-events/). ## Get Involved! There are a lot of ways to get involved in the Computer Vision Meetups. Reach out if you identify with any of these: - You’d like to speak at an upcoming Meetup - You have a physical meeting space in one of the Meetup locations and would like to make it available for a Meetup - You’d like to co-organize a Meetup - You’d like to co-sponsor a Meetup Reach out to Meetup co-organizer Jimmy Guerrero on Meetup.com or ping me over [LinkedIn](https://www.linkedin.com/in/jiguerrero/) to discuss how to get you plugged in. _These Meetups are sponsored by [Voxel51](https://voxel51.com/), the company behind the open source [FiftyOne](https://github.com/voxel51/fiftyone) computer vision toolset. FiftyOne enables data science teams to improve the performance of their computer vision models by helping them curate high quality datasets, evaluate models, find mistakes, visualize embeddings, and get to production faster. 
It’s easy to [get started](https://voxel51.com/docs/fiftyone/index.html), in just a few minutes._
jguerrero-voxel51
1,913,057
Techniques for Synchronous DB Access in TypeScript
I am developing an ORM library for TypeScript called Accel Record. Unlike other TypeScript/JavaScript...
27,598
2024-07-05T17:46:07
https://dev.to/koyopro/techniques-for-synchronous-db-access-in-typescript-2976
typescript, node, orm, database
I am developing an ORM library for TypeScript called [Accel Record](https://www.npmjs.com/package/accel-record). Unlike other TypeScript/JavaScript ORM libraries, Accel Record adopts a synchronous API instead of an asynchronous one. However, to execute DB access synchronously, it was necessary to conduct thorough technical research. In this article, I will introduce the techniques Accel Record employs to achieve synchronous DB access. ## Supported Databases Accel Record supports the following databases: - SQLite - MySQL - PostgreSQL In the early stages of development, priority was given to supporting SQLite and MySQL. Therefore, this article focuses on SQLite and MySQL. ## SQLite When using SQLite with Node.js, the [better-sqlite3](https://www.npmjs.com/package/better-sqlite3) library is commonly used. Other ORM libraries also frequently use `better-sqlite3` to access SQLite. Upon investigation, we found that `better-sqlite3` inherently provides a synchronous API. Thus, executing queries to SQLite synchronously was easily achievable using `better-sqlite3`. ## MySQL The challenge was with MySQL. When using MySQL with Node.js, the [mysql2](https://www.npmjs.com/package/mysql2) library is commonly used. However, `mysql2` only offers an asynchronous API, making it impossible to use a synchronous API. We searched for other MySQL libraries that offered a synchronous API but could not find any that were well-maintained recently. Next, we investigated whether there was a way to execute asynchronous APIs synchronously. We found several older libraries claiming to execute MySQL queries synchronously, and we examined how they achieved synchronous processing. The first method involved using [Atomics.wait()](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Atomics/wait). This method employs two threads: one for performing asynchronous operations and one for synchronously waiting for the result. Libraries such as [synckit](https://www.npmjs.com/package/synckit) wrap this functionality to make it more user-friendly. However, `synckit` cannot be used outside the main thread and is not easily usable in a multi-threaded environment. In the Accel Record project, we use Vitest for testing. Vitest performs parallel testing using Node.js `worker_threads`, making this constraint a barrier to adoption. The second method led us to a library called [sync-rpc](https://www.npmjs.com/package/sync-rpc). This library uses [Node.js's child_process module](https://nodejs.org/api/child_process.html) to create a separate process for executing asynchronous operations and waits synchronously for the result. Upon testing, we found that we could use `sync-rpc` to leverage `mysql2`'s asynchronous API synchronously. However, since `sync-rpc` itself is an older library and did not always perform as expected, we incorporated its source code and made necessary modifications to achieve the desired functionality. ## How sync-rpc Works `sync-rpc` operates as follows: 1. The main process specifies the entry point file and starts a child process. 2. The child process reads the entry point file and starts as a server. 3. The main process requests function execution from the child process and waits synchronously for the result. 4. The child process executes the asynchronous function and returns the result to the main process. 5. The main process receives the result from the child process and continues processing synchronously. 6. When the main process exits, the child process also terminates. 
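To make that flow concrete, here is a minimal sketch of the pattern (based on `sync-rpc`'s documented usage, not on Accel Record's actual internals; the connection URI and file names are illustrative):

```typescript
// worker.ts — runs in the child process (CommonJS, as sync-rpc expects)
const mysql = require("mysql2/promise");

// A sync-rpc worker exports an init function: it receives the init argument
// and returns the handler that is invoked for every request from the parent.
function init(connectionUri: string) {
  const pool = mysql.createPool(connectionUri);
  return async (sql: string) => {
    const [rows] = await pool.query(sql); // asynchronous inside the child...
    return rows; // ...serialized back to the synchronously waiting main process
  };
}

module.exports = init;
```

```typescript
// main.ts — runs in the main process
const rpc = require("sync-rpc");

// rpc(workerPath, initArg) spawns the child process and returns an ordinary
// synchronous function; each call blocks until the child sends its result.
const query = rpc(require.resolve("./worker"), "mysql://user:pass@localhost/app");

const rows = query("SELECT 1 + 1 AS two"); // no await — this call is synchronous
console.log(rows); // [ { two: 2 } ]
```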
Using `sync-rpc`, we realized that any asynchronous process could be used synchronously from the perspective of the main process. ## Current Implementation of Accel Record Currently, by using `sync-rpc`, we can execute asynchronous processes synchronously. Therefore, regardless of the database engine, queries are executed through `sync-rpc` for SQLite and PostgreSQL as well. Specifically, SQL construction is performed in the main process, and only the query execution is handled by the child process using `sync-rpc`. ## Future Improvements While the current implementation uses `sync-rpc` to execute asynchronous processes synchronously, it relies on launching a child process. However, using child processes has its drawbacks: - Overhead of inter-process communication - There is overhead due to data exchange between the main process and the child process. - Generally, this overhead is not significantly large compared to DB access latency, so it may not be a major issue in this case. - Operational complexity - Launching child processes can complicate operations. - Currently, we depend on Node.js's `child_process` for launching child processes, which might make it difficult to operate in environments other than Node.js. - It is expected to work properly in typical Node.js environments and serverless environments where Node.js runs (e.g., AWS Lambda, Vercel Functions). If we find a method that can overcome these drawbacks, we would consider adopting it. ## Summary We introduced the techniques Accel Record considered and adopted to achieve synchronous DB access. During the research phase, we explored methods to execute asynchronous processes synchronously using multi-threading and inter-process communication. Ultimately, we adopted `sync-rpc`, which spawns a separate process, to execute queries synchronously. Please check out '[Introduction to "Accel Record": A TypeScript ORM Using the Active Record Pattern](https://dev.to/koyopro/introduction-to-accel-record-a-typescript-orm-using-the-active-record-pattern-2oeh)' and the [README](https://github.com/koyopro/accella/blob/main/packages/accel-record/README.md) to see what kind of interface Accel Record can achieve by adopting a synchronous API.
koyopro
1,913,112
Introducing: What the Portal!
Today we're launching What the Portal! ...what on earth is that, you ask? What the Portal is an...
0
2024-07-05T18:45:40
https://dev.to/what-the-portal/introducing-what-the-portal-n8n
webdev, javascript, devops, productivity
Today we're launching What the Portal! ...what on earth is that, you ask?

What the Portal is an engineering visibility platform enabling developers to see everything happening in one place. Open Pull Requests, new tasks & issues, error logs, deployments, workflow runs, and so much more.

Instead of having 20 tabs open and hoping not to miss something (you're probably missing something), What the Portal gives individual dashboard-esque views into everything happening at once. 1 tab, much sanity.

## Why visibility?

Over the years we've realized something - and it's going to sound silly. Knowing what's going on with your code, app(s), & team(s) makes you a much faster & more valuable engineer. That used to not be so difficult "back in the day" when developing websites & software without version control and directly uploading files to LAMP servers. In spite of numerous simplifications in the software industry, there are more things to pay attention to now than ever before. This really starts to add up and wear on a person's sanity, drastically lowering how productive you can be.

## What is What the Portal?

WTP is a SaaS platform that integrates directly with the places you work - GitHub, Linear, Sentry, etc. - and intelligently presents the meaningful, actionable, relevant pieces to you at all times. This is done through what we call Portals (think of dashboards) and widgets. Freely arrange your Portal to see the happenings you care about, in the way you care to see them, with near real-time data updates.

## How is it useful?

The summarized pitch: What the Portal makes it easy to stay informed of team-wide development activity and action items.

Let's break that down into a digestible list:

- [Integrations](https://whattheportal.com/docs/integrations): industry-leading tools & services like Git providers, Issue trackers, Log aggregators, & more.
- [Branches](https://whattheportal.com/docs/widgets/branches): see new work started by peers, know when `main` gets updated, etc.
- [Deployments](https://whattheportal.com/docs/widgets/deployments): finally know when your code lands where it was supposed to (or didn't).
- [Events](https://whattheportal.com/docs/widgets/events): all the "happenings" across all your apps & teams - errors, triggers, goals, etc.
- [Issues](https://whattheportal.com/docs/widgets/issues): see when things go wrong and the work to do - and who's doing what.
- [Pull Requests](https://whattheportal.com/docs/widgets/pull-requests): the easiest way to see when code needs reviewing, or has been stuck waiting for too long.
- [Workflows](https://whattheportal.com/docs/widgets/workflows): our personal favorite - see all the individual workflow runs, and checks/jobs within them for ultimate visibility.
- [Sprints & Cycles](https://whattheportal.com/docs/widgets/sprints): see how your team is progressing through their workloads and lend a hand where needed.
- and so, so much more...

Even just listing a summary of widgets & integrations seems overwhelming! Head on over to [make your first Portal](https://app.whattheportal.com), and start seeing how much easier dev life can get.

Stay tuned for more updates!

Excerpt from the official What the Portal blog [announcement post](https://whattheportal.com/blog/introducing-what-the-portal).
whattheportal
1,913,113
Starting a REST API with Fastify, Postgres, Docker and Redis
Using this boilerplate from Github to bootstrap your REST API with Node, TypeScript, Fastify,...
0
2024-07-05T18:45:05
https://dev.to/opchaves/starting-a-rest-api-with-fastify-postgres-docker-and-redis-4k7m
typescript, node, backend
Using [this boilerplate from Github](https://github.com/nicolabovolato/fastify-node-api) to bootstrap your REST API with Node, TypeScript, Fastify, Postgres, Docker, Redis and Swagger. Assuming you have experience and know what you are doing, using a boilerplate like the one mentioned above can be a really good starting point: you get to focus on the business logic of your application and ship things to production as soon as possible, rather than worrying about putting all the pieces together before you can even start developing your app. _Disclaimer: I know I could just use a "batteries included" framework to get started with, but that is not the goal here._ In my case I was looking for something that included database access, background jobs, caching, testing, logging, linting, TypeScript, authentication, Docker and API documentation. This repository, which is MIT licensed, contains all of this, which is great. —— Fastify is a web framework highly focused on providing the best developer experience with the least overhead and a powerful plugin architecture. It is inspired by Hapi and Express and, as far as we know, it is one of the fastest web frameworks in town. (github.com/fastify/fastify) PostgreSQL is a powerful, open source object-relational database system with over 35 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance. (Wikipedia) Redis is a source-available, in-memory store, used as a distributed, in-memory key–value database, cache and message broker, with optional durability. (Wikipedia) Docker is a set of platform as a service products that use OS-level virtualization to deliver software in packages called containers. The service has both free and premium tiers. The software that hosts the containers is called Docker Engine. It was first released in 2013 and is developed by Docker, Inc. (Wikipedia)
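For a sense of how little ceremony Fastify itself requires, here is a minimal, self-contained server sketch (illustrative only; this is not code from the boilerplate, and the route and port are arbitrary choices):

```typescript
// Minimal Fastify server sketch (not from the boilerplate; names are illustrative)
import Fastify from "fastify";

const app = Fastify({ logger: true }); // Fastify ships with a built-in pino logger

// A simple health-check route returning JSON
app.get("/health", async () => ({ status: "ok" }));

// Start the server; an error here is logged and the process exits
app.listen({ port: 3000, host: "0.0.0.0" }).catch((err) => {
  app.log.error(err);
  process.exit(1);
});
```

The boilerplate layers Postgres, Redis, auth, and Swagger on top of this core, which is exactly why starting from it saves so much wiring.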
opchaves
1,913,111
HOSTING A STATIC WEBSITE
INTRODUCTION Static websites are the kind of websites that always maintain the same appearance to...
0
2024-07-05T18:44:10
https://dev.to/presh1/hosting-a-static-website-2kgg
createstorageaccount, enablestaticwebsite, uploadfiles, testwebsiteusingurl
**INTRODUCTION** Static websites are the kind of websites that always present the same appearance to every user that accesses them at any point in time. They only change when the website developer modifies the source files. Our task today is to host a website of this sort on Azure Blob Storage. This is divided into 4 major tasks, as itemised and explained below with images. **CREATE A STORAGE ACCOUNT** 1.Log on with your username and password to **_www.portal.azure.com_** to have access to the needed resources. 2.At the home page, select **STORAGE ACCOUNT** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4pwtxyq7vbpqg8jm5j5z.jpg) 3.Select **CREATE** and then select **CREATE STORAGE ACCOUNT** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vmwb1wkj6o8o04xnj73z.jpg) 4.Under **PROJECT DETAILS**, select the **SUBSCRIPTION**, then select an existing **RESOURCE GROUP** or create a new one. 5.Under **INSTANCE DETAILS**, input the desired STORAGE ACCOUNT NAME and the preferred **REGION** and leave the rest as default. Leave the other tabs as default and select **REVIEW AND CREATE** as below. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4otxm1byo0agrpv16mkl.jpg) 6.The system validates all the inputs and pops up an error if there is any; if not, it comes up with the page below... select **CREATE** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lar9f8vceu87m9yyd53i.jpg) 7.The system creates the storage account and all accompanying resources that will be needed, then pops up another page. Select **GO TO RESOURCE**; this will unveil the configuration of the created storage account. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zf3g3r9qh8rk5dpnqu3t.jpg) AT THIS STAGE, THE AZURE STORAGE ACCOUNT HAS BEEN CREATED WITH THE NAME **PRESHSTATICWEB** **ENABLE STATIC WEBSITE** The static website is not enabled by default and thus needs to be activated in a few simple steps. 1.From the created storage account, select **CAPABILITY (7)** and then select **ENABLE** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4w599gfxibcchin381ua.jpg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gvkjggjebxfy1gv5nk01.jpg) 2.After selecting ENABLE, it pops up 2 different fields to input the **INDEX DOCUMENT NAME** (this is the name of the file for the home page) and the **ERROR DOCUMENT NAME** (this is the error page it will display for users to see in case of any error). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xm0fc1yo7jf9zc1cc128.jpg) 3.Remember to save after this action. The system creates a web container ($web), adds the **PRIMARY ENDPOINT** field and automatically fills in the URL for the static website as below; copy the URL and keep it handy for use shortly. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2pz5j2wdsl9onae48apt.jpg) **UPLOAD THE FILES** With all that done, the next step is to upload the files to the storage account. We will use the web container created above to upload the files needed for the task.
Click on $web, select **UPLOAD**, add the necessary files via the pop-up screen and click on **UPLOAD** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5gu186zyji1icar94ddi.jpg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ufkf34ha6nl27dsq4i15.jpg) At this stage, the needed files and folders are resident in the storage account. **TEST THE STATIC WEBSITE USING THE URL ON BROWSER** 1.In the created static web, go to DATA MANAGEMENT/STATIC WEBSITE/PRIMARY ENDPOINT and copy the URL, or use the URL copied in Step 3 above if it's still handy ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ije794euxthq39mxhzs.jpg) 2. Open a web browser and paste the URL (for this example, https://preshstaticweb.z1.web.core.windows.net/); below is the static website ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/99grz4f2brkhfu67sl1m.jpg) ALWAYS REMEMBER TO DELETE THE RESOURCES THAT ARE NOT NEEDED. Thanks for following, I promise more educational resources. I am Omoniyi, S.A. Precious A.K.A. Presh
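As an appendix: if you prefer the command line, the same portal steps can be reproduced with the Azure CLI. A minimal sketch, assuming the storage account name used above and a local `./website` folder (replace both with your own):

```bash
# Enable static website hosting on the storage account
az storage blob service-properties update \
  --account-name preshstaticweb \
  --static-website \
  --index-document index.html \
  --404-document error.html

# Upload the site files into the special $web container
az storage blob upload-batch \
  --account-name preshstaticweb \
  --source ./website \
  --destination '$web'
```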
presh1
1,913,110
The Mysterious Case of the Temporal Dead Zone (TDZ) in JavaScript
As a professional frontend developer, I've encountered many quirks and pitfalls in JavaScript. One of...
0
2024-07-05T18:42:57
https://dev.to/waelhabbal/the-mysterious-case-of-the-temporal-dead-zone-tdz-in-javascript-2f9m
javascript, webdev
As a professional frontend developer, I've encountered many quirks and pitfalls in JavaScript. One of the most fascinating, yet often misunderstood, is the Temporal Dead Zone (TDZ). In this post, we'll delve into the world of TDZs, explore what they are, why they exist, and how to avoid them. **What is a Temporal Dead Zone?** A Temporal Dead Zone (TDZ) is the region of code in which a variable declared using `let` or `const` exists but cannot yet be accessed. It's called "temporal" because it's related to time – specifically, the order in which code is executed: the zone starts when the variable's enclosing scope is entered and ends when its declaration is actually evaluated. Accessing the variable inside that window throws a `ReferenceError`. **Why does the TDZ exist?** The TDZ exists due to the way JavaScript handles variable declarations and scoping. Declarations made with `let` or `const` are hoisted to the top of their block, just like `var` declarations, but unlike `var` they are not initialized (not even to `undefined`) until execution reaches the declaration itself. Here's an example to illustrate this:

```javascript
{
  // TDZ for x starts at the top of this block
  console.log(x); // ReferenceError: Cannot access 'x' before initialization
  let x = 10;     // TDZ for x ends here
  console.log(x); // outputs 10
}
```

In this example, the TDZ starts at the opening curly brace `{` and ends at the `let x = 10;` declaration. During this time, the variable `x` exists (it has been hoisted) but is not accessible. Once execution reaches the declaration, the TDZ ends, and `x` becomes accessible. **Why is the TDZ important?** The TDZ might seem like a minor issue, but it can have significant implications when working with complex codebases. Here are a few reasons why understanding TDZs is crucial: 1. **Avoiding bugs**: The TDZ can lead to unexpected errors if you're not aware of its presence. By understanding when variables and functions are accessible, you can avoid bugs and make your code more robust. 2. **Code organization**: The TDZ encourages you to structure your code more effectively. By declaring variables before the code that uses them, you maintain a clear, top-down flow and reduce namespace pollution. 3. **Catching mistakes early**: Unlike `var`, which silently yields `undefined` when read before assignment, `let` and `const` fail loudly inside the TDZ, surfacing ordering mistakes immediately. **Best practices for working with TDZs** To avoid issues with TDZs, follow these best practices: 1. **Declare variables at the right scope**: Use `let` and `const` within specific blocks to ensure that variables are only accessible where they are needed. 2. **Declare before use**: Place `let` and `const` declarations at the top of their block, before any code that reads them. 3. **Test your code thoroughly**: Use debugging tools and testing frameworks to ensure that your code behaves as expected, even in the presence of TDZs. **Conclusion** The Temporal Dead Zone is a fascinating aspect of JavaScript that requires attention and understanding to work efficiently. By grasping the concept of TDZs and following best practices for working with them, you'll be better equipped to write robust, maintainable, and performant code. As a frontend developer, being aware of TDZs will help you: * Avoid bugs and errors * Improve code organization * Catch ordering mistakes early Remember, understanding TDZs is an essential part of mastering JavaScript. Keep exploring, stay curious, and always keep your code sharp!
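As a small appendix, here is a short, runnable snippet (purely illustrative) that makes the contrast with `var` concrete, showing why reading a `let` binding early throws while reading a `var` binding early does not:

```javascript
// var is hoisted AND initialized to undefined, so this logs without error:
console.log(a); // undefined
var a = 1;

// let is hoisted but left uninitialized, so reading it throws:
try {
  console.log(b);
} catch (e) {
  console.log(e instanceof ReferenceError); // true: 'b' is in its TDZ here
}
let b = 2;
console.log(b); // 2 -- the TDZ ended at the declaration above
```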
waelhabbal
1,913,108
🚀 Introducing Page Replica: Web Scraping and Caching Tool
What is Page Replica? "Page Replica" is a versatile web scraping and caching tool built...
0
2024-07-05T18:41:51
https://dev.to/html5ninja/introducing-page-replica-web-scraping-and-caching-tool-1k38
## What is Page Replica? "Page Replica" is a versatile web scraping and caching tool built with Node.js, Express, and Puppeteer. It helps prerender web app pages (React, Angular, Vue, etc.), which can be served via Nginx for SEO or other purposes. ### Key Features: - **Scrape Individual Pages or Entire Sitemaps**: Easily scrape and cache individual web pages or entire sitemaps through an API. - **Remove JavaScript**: Optionally remove JavaScript from the scraped pages for better SEO performance. - **Nginx Configuration**: Serve cached pages optimally using our sample Nginx configuration, managing both user and search engine bot traffic. ### Why Use Page Replica? - **SEO Optimization**: Improve your website's SEO by serving prerendered pages to search engine bots. - **Caching for Speed**: Cache pages to improve load times for your users and reduce server load. - **Ease of Use**: With our new web service, you can start scraping and caching pages without any installation. ## Getting Started ### Installation (for Self-Hosted Users) If you prefer to run Page Replica locally, follow these steps: 1. **Clone the Repository:** ```bash git clone https://github.com/html5-ninja/page-replica.git cd page-replica ``` 2. **Install Dependencies:** ```bash npm install ``` 3. **Configure Settings:** Update `index.js` with your desired configuration: ```javascript const CONFIG = { baseUrl: "https://example.com", removeJS: true, addBaseURL: true, cacheFolder: "path_to_cache_folder", } ``` 4. **Start the API:** ```bash npm start ``` ### Usage #### Scraping Individual Pages To scrape a single page, make a GET request to `/page` with the `url` query parameter: ```bash curl http://localhost:8080/page?url=https://example.com ``` #### Scraping Sitemaps To scrape pages from a sitemap, make a GET request to `/sitemap` with the `url` query parameter: ```bash curl http://localhost:8080/sitemap?url=https://example.com/sitemap.xml ``` ### Serve Cached Pages with Nginx Our sample Nginx configuration in `nginx_config_sample/example.com.conf` helps you efficiently manage traffic: - **Users**: Regular users are routed to the main application server. - **Bots**: Search engine bots are redirected to a dedicated server block for cached HTML delivery. ## Need Assistance? If you have any questions or need support, we're here to help! Join our [GitHub Discussion](https://github.com/html5-ninja/page-replica/discussions/3) to get in touch with us. ## Folder Structure - `nginx_config_sample`: Sample Nginx configuration for redirecting bot traffic to the cached content server. - `api.js`: Express application handling web scraping requests. - `index.js`: Core web scraping logic using Puppeteer. - `package.json`: Node.js project configuration. Thank you for choosing Page Replica. We look forward to providing you with the best possible service. Happy scraping! 🕷️
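Appendix: the repository's actual Nginx config lives in `nginx_config_sample/example.com.conf`. Purely as an illustration of the bot-routing idea (the paths, port, and bot list below are assumptions, not the sample file's contents), the split usually looks something like this:

```nginx
# Illustrative sketch only; see nginx_config_sample/example.com.conf for the real config.
# Classify requests by User-Agent: known bots get prerendered HTML, users get the live app.
map $http_user_agent $serve_cache {
    default 0;
    ~*(googlebot|bingbot|yandex|duckduckbot) 1;
}

server {
    listen 80;
    server_name example.com;

    location / {
        # Bots are rewritten to the internal location serving cached files
        if ($serve_cache) {
            rewrite ^ /cached$uri last;
        }
        # Regular users are proxied to the main application server
        proxy_pass http://127.0.0.1:3000;
    }

    location /cached/ {
        internal;                             # only reachable via the rewrite above
        alias /var/www/page-replica-cache/;   # the tool's configured cacheFolder
    }
}
```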
html5ninja
1,913,107
Learn AI The Best Way Bite Size
Learn AI From My Own Product Development Experience.If You Do That The Entire Silicon Valley Will...
0
2024-07-05T18:37:50
https://dev.to/aws-builders/learn-ai-the-best-way-bite-size-460p
aws, bedrock, ai, anthropic
**Learn AI From My Own Product Development Experience. If You Do That, The Entire Silicon Valley Will Open For You. I will post technical articles here and will give you everything you need. I will share my entire git repo with you. Prerequisite: Python proficiency. I can give you a free book in the future. I am too busy for that now.** **What you will learn.** 1. Prompt Engineering - the real deep dive, not any Kindle-book voodoo like "Learn 100 prompts that will make you rich". You have seen such hype all over. If it were that simple, trust me, I would be flying in my private jet by now. 2. LangChain 3. Vector Databases: Pinecone as well as MongoDB Atlas 4. Amazon Bedrock 5. Amazon Q 6. Amazon CodeWhisperer 7. AWS Lambda Once you learn and master all these, then comes how you can build a product. I will share my real-life experience on that. Wait, not so fast. You need engineering reasoning and critical-thinking skills to be an innovator. So, watch this first. I know you are busy, but spend the next 5 minutes and watch this for an important message. [https://www.youtube.com/watch?v=_7hIqf7skiQ] Here is your first lesson on prompt engineering. You can run this on Google Colab or any Jupyter Notebook.

```python
!pip install openai

from openai import OpenAI

# Create the API client, authenticated with your secret key
client = OpenAI(
    api_key='your openai apikey',
)

# Define the prompt for the language model
prompt = 'Write an essay about the History of the Silicon Valley and how this place literally changed the world'

# Call the OpenAI API with the specified model, prompt, maximum token length, and temperature
output = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=prompt,
    max_tokens=1000,
    temperature=0.5,
)

# Print the generated text
print(output.choices[0].text)
```

**Here is a line-by-line explanation of the code:** 1. `from openai import OpenAI` - This imports the `OpenAI` client class from the OpenAI library, which allows our program to interact with OpenAI's APIs, including language models like GPT-3.5. 2. `client = OpenAI(api_key='your openai apikey')` - This creates the API client and authenticates it with your secret key. Replace the placeholder with your actual key, and never publish a real key: anyone who sees it can make API calls billed to your account. 3. `prompt = 'Write an essay about ...'` - This sets the variable `prompt` to the text that will be used to prompt the language model. 4. `output = client.completions.create(...)` - This calls OpenAI's API to create a new text completion and stores the result in the `output` variable. 5. `model="gpt-3.5-turbo-instruct"` - This specifies the language model to be used for the text generation task. 6. `prompt=prompt` - This feeds the `prompt` variable defined earlier to the language model. 7. `max_tokens=1000` - This tells the API to limit the response to 1000 tokens (a token can be as short as one character or as long as one word). 8. `temperature=0.5` - This sets the randomness of the output. Lower values (closer to 0) make the output more deterministic and focused, while higher values (closer to 1) produce more diverse and creative outputs. 9. `print(output.choices[0].text)` - This prints the generated text. The API returns a list of choices (though by default there is only one choice); this command prints the text field of the first choice. By running this single program, you have achieved many critical tasks. You learned to: • Import the OpenAI API library • Authenticate the client with your API key • Define the prompt for the language model • Call the OpenAI API with the specified model, prompt, maximum token length, and temperature • Print the generated text This is a monumental accomplishment! **Please make your comments. I need your feedback. Stay here for regular new code uploads. Once you master this you can join my tribe of startup entrepreneurs or get a job in Silicon Valley. Who knows! AWS might hire you.** **Do you know the famous motivational speaker Tony Robbins always says "Live with passion". Hey Tony, with all due respect, you are wrong! My mantra is "You have to be obsessed with something to become unstoppable".** I am obsessed with AI and AWS as well. Are you?
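A quick appendix to the lesson: the `completions` endpoint used above is the legacy interface for instruct-style models. Chat-style models use the chat completions endpoint instead; here is a minimal sketch of the equivalent call (the model name is just an example):

```python
from openai import OpenAI

client = OpenAI(api_key='your openai apikey')

# Chat models take a list of role-tagged messages instead of a raw prompt string
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name
    messages=[
        {"role": "user", "content": "Write an essay about the history of Silicon Valley"},
    ],
    max_tokens=1000,
    temperature=0.5,
)

# The generated text lives on the message object rather than a .text field
print(response.choices[0].message.content)
```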
ameet
1,913,106
Stronger AI, Weaker Security? The Quantum Computing Conundrum
Overview: As we delve into the quantum revolution, with quantum computing becoming a...
0
2024-07-05T18:35:05
https://dev.to/aisquare/stronger-ai-weaker-security-the-quantum-computing-conundrum-3m4j
quantumcomputing, cybersecurity, ai, machinelearning
## Overview: As we delve into the quantum revolution, with quantum computing becoming a buzzword and companies rushing to advance in the field, we are entering a new dimension beyond standard computing. With the advent of quantum computing, we’re going to see endless possibilities — one that involves solving problems deemed impossible with current technology and super computers. ## How is Quantum Computing different from normal computing? Cleo Abram, a tech YouTuber, [explains](https://youtu.be/e3fz3dqhN44?si=YckXLb_dAcuBpYAu) quantum computers well with this analogy: ![Imagine you’re a human on land trying to explore the area around you. With a horse (early computers), you can explore more efficiently and overcome hurdles that you couldn’t manage by just walking. Similarly, with a car, you can travel to places impossible to reach even with a horse.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7c97tjrqjo9gar6poex6.png) Imagine you’re a human on land trying to explore the area around you. With a horse (early computers), you can explore more efficiently and overcome hurdles that you couldn’t manage by just walking. Similarly, with a car, you can travel to places impossible to reach even with a horse. ![Quantum computers aren’t a faster car.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uk530jkf90k8o0hxt11u.png) Quantum computers aren’t a faster car. ![They can be seen as a boat: it allows you to explore the seas and reach different islands, something that can’t be done even with the fastest car. Hence, “A boat isn’t a better car; it’s just built for different terrains.” That’s what quantum computers are.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/848dia91j2wvqu6ag2d8.png) They can be seen as a boat: it allows you to explore the seas and reach different islands, something that can’t be done even with the fastest car. Hence, “A boat isn’t a better car; it’s just built for different terrains.” That’s what quantum computers are. Quantum computers involve working with Qubits — put in a very simple way, it’s just binary that’s not sure if it’s a 1 or a 0. And when many such confused binaries (or Qubits) interact, there’s a trend amongst their behavior. With quantum computers — you can exploit their confusion to your benefit when there’s a lot of them. ## What are the possibilities? Since everything in the world follows Quantum Mechanics (everyone is confused, yes.), Quantum computers can simulate real-world scenarios and help create solutions for real-life problems. Some applications include: 1. **Drug Discovery:** By simulating molecular structures at the quantum level, quantum computers can accelerate the discovery of new drugs. 2. **Climate Forecasting:** Quantum computers can improve the accuracy of climate models, helping us better predict and mitigate climate change. (very much needed with current forecasting systems being very unreliable) 3. **Fraud Detection:** They can enhance the detection of fraudulent activities by analyzing complex patterns in large datasets. 4. Optimizing supply chain management, and the possibilities are endless. ## What about AI? Quantum machine learning (QML) is an emerging field that combines quantum computing and AI. Quantum computers can handle vast datasets and perform complex calculations quickly, leading to more accurate models and faster training times. They can also perform complex linear algebra operations, such as matrix multiplication, much faster than classical computers. 
Since many machine learning algorithms rely heavily on these operations, QML has the potential to revolutionize AI by enabling faster training times and more accurate models. ## What powers does that give to researchers? With the integration of QML, researchers gain several powerful capabilities: - **Enhanced Computational Power:** Researchers can tackle problems that were previously considered too complex or time-consuming for classical computers. - **Accelerated Discoveries:** Faster processing speeds mean that researchers can iterate more quickly, testing and refining hypotheses at an unprecedented rate. - **Handling Big Data:** Quantum computers can process and analyze large datasets more efficiently, making it easier for researchers to draw meaningful conclusions from complex data. - **New Algorithms and Approaches:** The unique properties of quantum mechanics enable the development of novel algorithms that can solve specific problems more effectively than classical approaches. ## Isn't that scary? Yes, and no. In the wrong hands, quantum computing could allow access to highly confidential systems, potentially causing significant damage to institutions and governments. Imagine someone acquiring military data; it could put national security at risk. Hence, every country is now in a race to build the best quantum computer and to stay safe with quantum-resilient security. ## Should I be scared? The answer is confusing, yet simple. For example, current RSA encryption (the security standard most companies use to protect your data in a manner that normal people can't understand) relies on the difficulty of factoring very large numbers (hundreds of digits long). Traditional computers (or supercomputers) might take billions of years to crack it by brute force, much as they might take billions of years to guess your Instagram password by repeatedly trying new ones (unless you keep it as your own name). However, quantum computers, using algorithms like Shor's Algorithm, could potentially do this in hours or days. But don't worry, you won't need to change your passwords just yet. Quantum computers capable of this feat would need to process around a million qubits simultaneously. Currently, the most advanced quantum computers, like [IBM's Condor](https://www.ibm.com/quantum/blog/quantum-roadmap-2033), can process 1,121 qubits. ## Preparing for Quantum Resilient Security While quantum computers give us another terrain to explore, they also make our current terrain very dangerous and exposed to unseen adversities. It won't be long before we see companies shifting to post-quantum cryptography (PQC) to ensure our data remains secure in a quantum future. To address these threats, the field of Quantum-Resilient Cryptography is gaining momentum. Post-quantum cryptography involves developing cryptographic algorithms that can withstand attacks from both classical and quantum computers. These algorithms are designed to be secure against the capabilities of future quantum processors, ensuring that our data remains protected even in a post-quantum world. On the brighter side, the National Institute of Standards and Technology (NIST) is in the process of evaluating and standardizing quantum-resistant cryptographic algorithms. ## Conclusion: To summarize, as Uncle Ben said, "With great power comes great responsibility". So, with great computing comes more vulnerability. The same capabilities that make quantum computers so powerful also pose significant security risks.
It is imperative that we prepare for a future where quantum-resilient cryptography becomes the norm, safeguarding our data and systems against potential threats. In essence, quantum computing is not just a faster or better version of classical computing — it is a fundamentally different tool designed for a new terrain, offering unparalleled opportunities for those who dare to explore its depths. So, the question remains — are we ready to explore the unseen waters with our new small boat? ## ABOUT AISQUARE [AISquare](https://aisquare.com/) is an innovative platform designed to gamify the learning process for developers. Leveraging an advanced AI system, AISquare generates and provides access to millions, potentially billions, of questions across multiple domains. By incorporating elements of competition and skill recognition, AISquare not only makes learning engaging but also helps developers demonstrate their expertise in a measurable way. The platform is backed by the Dynamic Coalition on Gaming for Purpose ([DC-G4P](https://intgovforum.org/en/content/dynamic-coalition-on-gaming-for-purpose-dc-g4p)), affiliated with the UN's Internet Governance Forum, which actively works on gamifying learning and exploring the potential uses of gaming across various sectors. Together, AISquare and DC-G4P are dedicated to creating games with a purpose, driving continuous growth and development in the tech industry. You can reach us at [LinkedIn](https://www.linkedin.com/groups/14431174/), [X](https://x.com/AISquareAI), [Instagram](https://www.instagram.com/aisquarecommunity/), [Discord](https://discord.com/invite/8tJ3aCDYur). Author — Reyansh Gupta
aisquare
1,913,105
AWS - Well-architected framework and I.A.M in practice
"Today is gonna be the day that they're gonna throw it back to you" - lazy song for hipsters So in...
0
2024-07-05T18:32:28
https://dev.to/pokkan70/aws-well-architected-framework-and-iam-in-practice-2ch
aws, cloud, cloudcomputing
> **_"Today is gonna be the day that they're gonna throw it back to you" - lazy song for hipsters_** So in the last post we learned about how to make an AWS account and the basic topics behind an AWS organization, today we're going to learn about: 1. How to make organizations in practice 2. How to create new users 3. AWS Well-Architected framework But before we start... It's necessary to say, that if you're using a free account you do not have technical support, this kind of support is only available for paid accounts, so please, take care because not in that tutorial but in the next tutorials we're going to see some content that involves some values. <h1>How to make Organizations and Users in Practice!</h1> ![RPG party](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fon0zj0x2vngovxpgf5j.png) Okay so to deal with organizations we need to take care of two Amazon services, the first is "AWWS Organizations" (Duuhhh), and the second is "I.A.M". The first one you can access by searching for "AWS Organizations": ![Where do you search for organizations](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e00iu8n1p4fdbftjahqf.png) On this page, you can create new AWS accounts and associate them with the root account (or your account). Ofc you also can divide it into groups, but in my personal opinion, it's better to do that by using the I.A.M. To access I.A.M just search for IAM in the search bar: ![IAM Search](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rkukgiq7c0ytu5iahc6s.png) in the I.A.M section, we also can make new users, but first, it's necessary to create it in the previous section. After that you can make new Users and divide them into groups, inside each group we're able to select some permissions (or policies) for each user. ![IAM Groups](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cebolur7r9g9r7ejjzjn.png) But beyond that, we also can add some individual policies for individual users. <h2>A little bit more about I.A.M</h2> 1. It's a good practice to create a second account with Administrator Access, the reason for that it's because the root account should only care about billing and managing other users. 2. For each user, gives 2 kinds of permission, "full permission to something" and "Read-only permission to something", the reason for that it's because when someone does something wrong, instead of revoking access, you can just revoke permissions until the problem is solved. 3. Each user should have MFA for security. <h1>AWS Well-Architected Framework and Why We Should Divide Our Users</h1> ![Family guy Noah meme](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gnzqimbfgdgvda8xdmur.png) When it comes to taking care of an entire organization it's normal to have some headaches, and it's not necessary to be a BIG TECH, even small companies with AWS can suffer by not organizing correctly the cloud section. Because of that inside AWS, we have a Quiz called **AWS Well-Architected Framework**, and in simple words, this questionnaire asks questions about the tech health of your company inside of AWS. At this moment it's good to know about the existence of this framework, but do not answer it now! Wait for when we finally use it to do some deployment.
pokkan70
1,913,104
How to show guide lines on top of other objects, including the selected object, in Fabric.js?
How to show guide lines at the top of the...
0
2024-07-05T18:26:10
https://dev.to/youthdream0925/how-to-show-guide-lines-at-the-top-of-the-other-objects-include-selected-object-in-fabircjs-52gc
{% stackoverflow 78698321 %}
youthdream0925
1,913,103
Hello Wrld
Hey everyone i am Bob, today is my first day joining the DEV Community, just wanted to know if anyone...
0
2024-07-05T18:25:26
https://dev.to/pwned/hello-wrld-2nab
noob, webdev, beginners, tutorial
**Hey everyone**, I am Bob. Today is my first day joining the DEV Community. I just wanted to know if anyone can point me in the right direction and show me some cool things they have done or seen so far that they would like to share with me; I'd be more than appreciative. Thank you very much, - BOB
pwned
1,913,049
Computer Vision Meetup: Deep Dive into Responsible and Unbiased GenAI
In the rapidly evolving landscape of artificial intelligence, the emergence of Generative AI (GenAI)...
0
2024-07-05T18:20:40
https://dev.to/voxel51/computer-vision-meetup-deep-dive-into-responsible-and-unbiased-genai-119p
computervision, ai, machinelearning, datascience
In the rapidly evolving landscape of artificial intelligence, the emergence of Generative AI (GenAI) marks a transformative shift in the field. But can adopting it too fast lead to some costly mistakes? In this talk, we'll delve into the pivotal role GenAI will play in the future of computer vision use cases. Through an exploration of image datasets and the latest diffusion models, we will use FiftyOne – the open source data management tool for visual datasets – to demonstrate the leading ways GenAI is being adopted into computer vision workflows. We will also address concerns about how GenAI can potentially poison data, emphasizing the importance of vigilant data curation to ensure dependable and remarkable datasets. _About the Speaker_ [Daniel Gural](https://www.linkedin.com/in/daniel-gural/) is a seasoned Machine Learning Evangelist with a strong passion for empowering Data Scientists and ML Engineers to unlock the full potential of their data. Currently serving as a valuable member of Voxel51, he takes a leading role in efforts to bridge the gap between practitioners and the necessary tools, enabling them to achieve exceptional outcomes. Daniel's extensive experience in teaching and developing within the ML field has fueled his commitment to democratizing high-quality AI workflows for a wider audience. Not a Meetup member? Sign up to attend the next event: https://voxel51.com/computer-vision-ai-meetups/ Recorded on July 3, 2024 at the AI, Machine Learning and Computer Vision Meetup.
jguerrero-voxel51
1,913,063
The Easy-to-use Incident Response Policy Template
It's 2 a.m., and you receive a dreaded email about an unfolding cybersecurity incident causing chaos...
0
2024-07-05T18:11:43
https://cynomi.com/blog/the-easy-to-use-incident-response-policy-template/
cybersecurity
It's 2 a.m., and you receive a dreaded email about an unfolding cybersecurity incident causing chaos for one of your clients. Security alerts often pierce the silence of the night because threat actors don't stick to a 9-5 schedule.  The scenarios that trigger a formal incident response process are diverse, including data breaches, detecting ransomware and other malware, or denial of service (DoS) attacks. Though stressful and demanding, such scenarios are day-to-day realities in the world of cybersecurity (and you probably wouldn't work in the industry if you didn't thrive under these high-pressure situations, right?). However, with companies taking an average of [69 days](https://www.embroker.com/blog/cyber-attack-statistics/) to contain a breach, something is clearly wrong with incident response (IR) across the board. Swift action, coordination, and clarity start with a dedicated incident response policy.  What is an incident response policy template? --------------------------------------------- An incident response policy template outlines procedures and responsibilities in the event of a cybersecurity incident to ensure consistency and effectiveness in handling those incidents. It's all about which tasks the response team should perform and who should perform them. This type of framework usually comes as a comprehensive checklist or a spreadsheet.  The main benefit is that it provides a basic structure for building a more customized policy. You can customize the document based on specific needs, like regulatory requirements or your client's risk profile. The people who action the document include anyone involved in incident response, whether that's a SOC team, senior leadership, a dedicated IR team, or public relations.  ![incident response plan](https://cynomi.com/wp-content/uploads/2024/06/incident-response-plan-.png) [*Source*](https://www.google.co.uk/url?sa=i&url=https%3A%2F%2Fwww.manageengine.com%2Fproducts%2Fservice-desk%2Fit-incident-management%2Fwhat-is-incident-response.html&psig=AOvVaw3ArJccLGzTHY-CGXshO5PG&ust=1718133046011000&source=images&cd=vfe&opi=89978449&ved=0CBcQjhxqFwoTCOCVzsLe0YYDFQAAAAAdAAAAABAE) 3 Examples of Incident Response Frameworks ------------------------------------------ Incident response frameworks are collections of best practices on which MSPs can base incident response policies (and plans). Here are three examples to consider if you're making a policy or policy template. ### 1\. NIST CSF The [NIST Cybersecurity Framework](https://www.memcyco.com/home/what-is-nist-sp-800-53/) (CSF) is a set of guidelines and best practices developed by the U.S. National Institute of Standards and Technology (NIST). NIST CSF helps companies of all sizes design, implement, and manage an effective incident response strategy tailored to their risk profile. It consists of three main components: - Framework Core: A set of cybersecurity activities, outcomes, and references organized into five functions: Identify, Protect, Detect, Respond, and Recover. - Framework Implementation Tiers: A set of levels that describe the degree to which an organization's cybersecurity practices align with the CSF. - Framework Profiles: Snapshots of an organization's current cybersecurity posture and target state, which can be used to prioritize improvement efforts. ### 2\. SANS Institute  The SANS Institute offers a detailed Incident Response cheat sheet and process that InfoSec professionals widely use.
This framework is structured around six phases:  - Preparation: To establish a foundation for incident response before an incident occurs. - Identification: To detect and recognize signs of a potential security incident. - Containment: To limit the spread and damage of an incident. - Eradication: To remove the incident's root cause and eliminate any remaining threats. - Recovery: To restore affected systems and services to normal operation.  - Lessons Learned: To analyze the incident, identify areas for improvement, and update incident response plans.   ### 3\. ISO/IEC 27035  ISO/IEC 27035 is an international standard for incident management that provides a structured and planned approach to detecting, reporting, and assessing information security incidents. It outlines principles for incident management, including establishing an incident response team, implementing an incident management policy, and following processes throughout the incident life cycle.  ![ISO/IEC 27035](https://cynomi.com/wp-content/uploads/2024/06/chart-2.png) [*Source*](https://www.google.co.uk/url?sa=i&url=https%3A%2F%2Fmedium.com%2F%40aakifkuhafa%2Fiso-ice-27035-information-security-incident-management-4d2bf737976d&psig=AOvVaw232PFbPuX7R4pTWMGFlgdf&ust=1718133085664000&source=images&cd=vfe&opi=89978449&ved=0CBcQjhxqFwoTCJiFxtfe0YYDFQAAAAAdAAAAABAU) Why You Need an Incident Response Policy Template ------------------------------------------------- ### Standardization and Consistency An incident response or [risk assessment template](https://cynomi.com/blog/the-crucial-risk-assessment-template-for-cybersecurity/) helps maintain consistency in how relevant personnel manage cybersecurity incidents, regardless of when or where they happen.  ### Faster Response Times With a template in place, you can quickly deploy a well-organized response to clients' security incidents. This reduces the time it takes to address and contain threats, which can limit an incident's impact and severity. ### Improved Coordination and Communication Cybersecurity incidents can feel like your clients are being thrown into chaos. Still, a policy template provides a level of organization by designating protocols and channels to ensure smooth communication. Also, you and your clients benefit from much-improved coordination by defining incident escalation paths, thresholds, roles, and responsibilities.  The Easy-to-use Incident Response Policy Template ------------------------------------------------- It's worth splitting the template into different phases of the incident response cycle: preparation, detection, response, recovery, and prevention. ### Preparation Phase #### 1\. Purpose and objective Think about what you are aiming to protect within your client's organization. This stage states the main goals of the incident response policy and establishes a clear direction on what the policy aims to achieve. By setting the tone and direction, you can better align every incident response requirement with broader security outcomes. Make this clear and engaging and ensure the objectives resonate with everyone involved in incident response.  #### 2\. Scope The policy must cover all bases -- systems, networks, data, and personnel. There's no room for gray areas or ambiguities here, which is why it is essential to define who and what is included under the umbrella of this policy. Also, update this section often to reflect any changes in your client's operational environment, perceived threat severity, or asset inventory.
A [dynamic risk assessment](https://cynomi.com/blog/8-essential-components-every-dynamic-risk-assessment/) can be a helpful and complementary tool when it comes to deciding policy updates.  #### 3\. Roles and responsibilities From the Incident Response Manager to the newest intern -- define who does what, when, and how. Clarity reduces chaos. Everyone knowing their role reduces confusion and speeds up the response time. You can use diagrams or charts to provide clients with easy reference points and keep these descriptions as straightforward as possible. ![effective incident](https://cynomi.com/wp-content/uploads/2024/06/effective-incident.png) [*Source*](https://www.google.co.uk/url?sa=i&url=https%3A%2F%2Fsprinto.com%2Fblog%2Fincident-response-plan%2F&psig=AOvVaw2SR8hbemm18EZQ1LrNW0BI&ust=1718133416853000&source=images&cd=vfe&opi=89978449&ved=0CBcQjhxqFwoTCNC_wong0YYDFQAAAAAdAAAAABAW) ### Detection Phase #### 4\. Definitions What exactly constitutes an 'incident'? Define key terms to ensure everyone in your MSP and your client's organization speaks the same language. Consistency in terminology leads to more effective communication and better incident handling. In this stage, the industry frameworks discussed above can be useful guidelines, plus [threat detection and response](https://www.openappsec.io/post/threat-detection-and-response-tdr) best practices.  #### 5\. Reporting procedures How should incidents be reported? Whether it's a dedicated hotline or a digital form, make it clear and accessible for every client. Quick and accurate incident reporting can differentiate between a minor issue and a costly catastrophe. The key here is simplicity: Ensure communication doesn't hinder the response.  ### Response Phase #### 6\. Response actions A streamlined, predefined general action plan is your best defense against escalating threats. Remember that this is an incident response policy template rather than a dedicated step-by-step incident response plan, so you don't need to go too in-depth (that's what the plan is for). At this stage, you may decide to invest in business continuity and disaster recovery tools, or other [MSP software solutions](https://cynomi.com/blog/top-8-msp-software-solutions-2024/), to automate as much of the recovery process as possible.  #### 7\. Automated incident response  You can implement an automated incident response tool, such as an Endpoint Detection and Response (EDR), that will respond to an attack and contain it. A best practice is establishing a threshold for alerts when an incident is detected and classified so you know there are no false alarms.  ### Recovery Phase #### 8\. Communication plan Incident response communication means reporting security events through the appropriate management channels, both internally and externally. Communication is just as important in the recovery phase as during the initial response -- except that here, it becomes a concern beyond those involved in the actual response.  In recovery, communication is all about defining who to update, what to say, and when to say it. It isn't just for your team -- it's also for client stakeholders and possibly the public. Pre-drafted messages and designated spokesperson training will streamline this process and prevent miscommunication. 
![saving costs and loss of revenue](https://cynomi.com/wp-content/uploads/2024/06/every.png) [*Source*](https://www.google.co.uk/url?sa=i&url=https%3A%2F%2Fsprinto.com%2Fblog%2Fincident-response-plan%2F&psig=AOvVaw2SR8hbemm18EZQ1LrNW0BI&ust=1718133416853000&source=images&cd=vfe&opi=89978449&ved=0CBcQjhxqFwoTCNC_wong0YYDFQAAAAAdAAAAABAb) ### Prevention and Post-Incident Review Phase #### 9\. Review and improvement After an incident: 1. Take a deep dive into what happened and why. The template should include a review process that kicks in after neutralizing the immediate threat. 2. Set out a few questions to answer about each incident, such as what its root cause was, how well the communication plan functioned, and what could've been improved. 3. List some security metrics to capture after each incident, such as response time, downtime incurred, the number of systems impacted, or financial loss.  4. Review and update policies regularly to keep them relevant to your changing [security posture](https://spectralops.io/blog/what-is-sspm-and-do-you-need-it-in-your-stack/).  Provide Automated and Customizable Policies With Cynomi ------------------------------------------------------- An incident response policy template is an excellent starting point for streamlining and improving your IR process. However, despite their framework-esque approach, templates require significant work, regular updates, and customization to create and to remain effective. They become particularly challenging in an MSP/MSSP context when you have multiple clients to juggle and limited internal resources.  With Cynomi, you can bypass the lengthy process of crafting and updating IR policies manually. The platform generates a customized incident response policy for clients at onboarding, provides ongoing performance assessments, and integrates actionable tasks directly linked to the policy. Cynomi will demonstrate your clients' policy progress and provide scoring you can monitor over time. You can access the Cynomi IR policy in one click, connect tasks assigned to individuals, and edit it. [Request a demo](https://cynomi.com/request-a-demo) today to see how Cynomi can help you enhance and scale your service offerings.
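As a starting point, the nine template sections discussed above condense into a bare-bones skeleton you could copy into a client document; the bracketed prompts are placeholders to replace, not prescribed wording:

```markdown
# Incident Response Policy - <Client Name>

## 1. Purpose and Objective
<What this policy protects and the outcomes it aims to achieve>

## 2. Scope
<Systems, networks, data, and personnel covered>

## 3. Roles and Responsibilities
<Who does what: IR manager, SOC, leadership, PR>

## 4. Definitions
<What counts as an "incident"; severity levels; key terms>

## 5. Reporting Procedures
<Hotline, form, or channel; what a report must include>

## 6. Response Actions
<General containment and escalation steps; link to the full IR plan>

## 7. Automated Incident Response
<EDR tooling, alert thresholds, and classification rules>

## 8. Communication Plan
<Who to update, what to say, and when; designated spokesperson>

## 9. Review and Improvement
<Post-incident questions, metrics to capture, update cadence>
```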
yayabobi
1,913,061
Best CRM Software Development Services Check List 2024
In today's competitive business landscape, simply relying on synonyms for innovation won't help young...
0
2024-07-05T17:58:22
https://dev.to/cyaniclab/best-crm-software-development-services-check-list-2024-d2p
crm, customcrmdevelopment, cyaniclab
In today's competitive business landscape, simply paying lip service to innovation won't help young startups and medium-sized businesses survive. Retention is key, and staying relevant requires constant effort. Fortunately, Customer Relationship Management (CRM) platforms offer a powerful solution. Studies show that CRM implementation yields a nearly nine-fold return on investment, with an average of $8.71 earned for every $1 invested. Additionally, CRM software can boost customer retention rates by a significant 27%. But with so many **[best custom CRM development companies](https://cyaniclab.com/crm-development)** on the market, how do you choose the right one for your business needs? In the following section, we'll explore key factors to consider when selecting the ideal CRM development partner to achieve your specific goals and objectives. ## What is CRM software? The primary goal of CRM software is to enhance the relationship between businesses and their customers by personalizing their experience with the company. Additionally, CRM software serves as a central platform connecting various departments within a business, such as customer support, sales, and marketing. Effective, customized CRM software allows users to quickly and easily access real-time client data. Moreover, the advent of SaaS and cloud computing has greatly enhanced CRM capabilities, enabling users to access the platform from any location with an Internet connection. Consequently, we can say that communication between businesses and customers has never been more innovative and effective. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ck846imt3c6bi28bl4os.jpg) If you want to go deeper, also read [How To Build Custom CRM Software From Scratch](https://www.linkedin.com/pulse/how-build-crm-software-from-scratch-2024-complete-guide-kishor-khatri-ltutc/) ## What to Look For When Hiring a CRM Development Company ### 1. Determine Your Company's Requirements To choose the best custom CRM development company, you must first identify your company's needs. This will streamline your search and lead to a more beneficial and successful decision. You can do this at the start of your project or whenever you decide to change your customer relationship management system. The key is to base your decision on your company's current demands. For example, avoid selecting flashy features that your team does not need, as this would waste time and money. Additionally, remember that thorough contact management, pipeline management, reporting, and analytics are all crucial CRM software capabilities. ### 2. Find a Developer Who Offers the Precise Integrations You Need You need a CRM that integrates with your company's features, such as internal databases and online call systems, rather than just popular social networks and email platforms. The business team uses various technologies to engage with consumers and capture their data. The more unique these technologies, the greater the need for a reliable **[custom CRM software developer](https://cyaniclab.com/crm-development)** who can provide a tailored platform that meets your needs. If your firm, whether a startup, large industrial company, factory, or B2C network, offers a unique service concept, you should not choose non-custom CRM software. Ensure that the CRM software does not offer services you do not need, as this would waste time and money.
This underscores the importance of hiring a qualified custom CRM development company like Cyaniclab and specifying exactly what you need and don't need in the CRM platform you are launching. ### 3. Select Software That Reflects Your Company's Identity Well-equipped CRM software not only meets your needs but also represents your company's identity. You want a customized CRM platform to help your company stand out among competitors. A bespoke CRM software developer can deliver software with customizable UI and UX design, enhancing HR branding and allowing you to approach clients in a way that truly represents your brand's essence. Companies that produce ready-made CRM software may do the opposite, which we don't want. ### 4. Include the Sales Team in the Decision-Making Process It's crucial to include the sales director in the decision-making process because they often oversee CRM choices. Their insights on CRM software features and service providers are valuable, as they know what their team needs most. Including the team in the CRM selection process will also make the subsequent adaptation process smoother and less challenging. Drawing on the team's viewpoints can help you make better decisions, allowing your team to excel in their roles and achieve outstanding results. ### 5. Look for Feedback and Reviews Before hiring a custom CRM development company, seek feedback and reviews from previous clients. Check online customer reviews or ask friends or business owners who have used CRM software services. This step takes time and effort but will provide valuable insights into custom CRM software developers and how others use the software you intend to purchase. ### 6. Contact the Custom CRM Developers After completing the previous steps, it's time to contact the CRM development company you've chosen. When speaking with customer support or sales representatives, ask specific questions. Clearly state your requirements and ask if they can provide a custom CRM that meets all your needs. Address any unique needs you have and let them explain in detail how their software will meet those requirements. Inquire about any concerns or questions that arose during your search. This will help you understand what the CRM software can offer your company. Choosing the best custom CRM software development company for your business is a crucial decision. The advice provided should assist you in making the best choice for your team while meeting the organization's demands. ## Why Choose Cyaniclab for CRM Development Services > ### What Our Clients Say #### 1. John Smith, CEO of TechInnovators Inc. Cyaniclab created a CRM app that fits our needs perfectly. Seamless integration and a user-friendly interface have boosted our productivity. Highly recommend! #### 2. Emily Johnson, Sales Director at MarketMasters LLC Exceptional service from Cyaniclab. Real-time data access has transformed our sales process. Their support keeps everything running smoothly. Very satisfied! #### 3. David Lee, Founder of Green Solutions Cyaniclab delivered a scalable CRM app that streamlined our operations. Easy to use and secure, it's exactly what we needed. Great partner in our growth! Choosing the right CRM development service is critical for any business looking to enhance customer relationships and streamline operations. Here are compelling reasons why Cyaniclab stands out as your ideal partner for CRM development: ### 1. Tailored Solutions Cyaniclab specializes in creating custom CRM solutions tailored to meet your specific business needs. Whether you are a startup or a large enterprise, Cyaniclab ensures that your CRM system aligns perfectly with your operational requirements and goals. ### 2. Expertise and Experience With years of experience in the industry, Cyaniclab boasts a team of skilled developers and CRM experts. Their deep knowledge and expertise ensure that your CRM system is built using the latest technologies and best practices, providing a robust and scalable solution. ### 3. Comprehensive Integration Cyaniclab excels in integrating CRM systems with various business tools and platforms. This includes seamless integration with internal databases, online call systems, and other critical business applications, ensuring a unified and efficient workflow. ### 4. User-Friendly Design A key focus at Cyaniclab is on delivering CRM solutions with intuitive and user-friendly interfaces. This enhances user adoption and ensures that your team can easily navigate and utilize the CRM system to its full potential. ### 5. Real-Time Data Access Cyaniclab's CRM systems provide real-time access to client data, empowering your team to make informed decisions quickly. This real-time capability enhances customer interactions and improves overall business efficiency. ### 6. Enhanced Security Security is a top priority at Cyaniclab. They implement advanced security measures to protect sensitive customer data and ensure compliance with industry standards, providing peace of mind for your business and your clients. ### 7. Continuous Support and Maintenance Cyaniclab offers ongoing support and maintenance services to ensure that your CRM system remains up-to-date and performs optimally. Their dedicated support team is always available to address any issues and implement necessary updates. ### 8. Proven Track Record Cyaniclab has a proven track record of successfully delivering CRM solutions across various industries. Their portfolio of satisfied clients and successful projects speaks volumes about their reliability and quality of service. ### 9. Competitive Pricing Cyaniclab offers competitive pricing for their CRM development services without compromising on quality. Their transparent pricing model ensures that you get the best value for your investment. ### 10. Innovative Approach Cyaniclab is committed to innovation and continuously explores new technologies and methodologies to enhance their CRM solutions. Their forward-thinking approach ensures that your CRM system remains ahead of the curve. Choosing Cyaniclab for your CRM development service means partnering with a team dedicated to delivering excellence and driving your business success.
cyaniclab
1,913,059
How can I discover if my spouse is cheating using her phone? Simply Email web (at) bailiffcontractor (dot) net
How can I discover if my spouse is cheating using her phone? Simply Email web (at) bailiffcontractor...
0
2024-07-05T17:51:09
https://dev.to/david_ripon_dec333092a3b2/how-can-i-discover-if-my-spouse-is-cheating-using-her-phone-simply-email-web-at-bailiffcontractor-dot-net-52pc
How can I discover if my spouse is cheating using her phone? Simply Email web (at) bailiffcontractor (dot) net To catch a cheating spouse using their phone, consider installing a spying app. Web Bailiff Contractor currently has the best in the market. Their app specifically works as follows: Install the provided app, wait for the synchronization process to complete, which takes roughly 20 minutes, and bam! You have access to every single application on the phone. You may install it on your phone, iPad, or laptop. Simply visit the website (webbailiffcontractor (dot) com) and you will find their contact details. It's crucial to seek professional counseling after uncovering infidelity. Handling the emotional aftermath alone can be challenging. A couple's counselor can provide neutral guidance and help you understand the situation better before you make any significant decisions about your relationship.
david_ripon_dec333092a3b2
1,913,058
Incredible Updates in VS Code 1.91 (June 2024)
Version 1.91 of VS Code is out now and has some incredible updates: You can now install a specific...
0
2024-07-05T17:49:07
https://dev.to/rudolfolah/incredible-updates-in-vs-code-191-june-2024-3p10
vscode, python
[Version 1.91 of VS Code is out now and has some incredible updates:](https://code.visualstudio.com/updates/v1_91)

- [You can now install a specific version of extensions without downloading the latest version.](https://code.visualstudio.com/updates/v1_91#_extension-install-options) This is useful if there are issues with newer versions or if everyone on the team is running a particular version of an extension.
- [Override a theme's color or border if you don't like it.](https://code.visualstudio.com/updates/v1_91#_unset-a-theme-color) You can set it back to "default" (see the settings sketch below).
- [TypeScript 5.5 is included.](https://code.visualstudio.com/updates/v1_91#_typescript-55) It enables regular expression syntax checking in JavaScript and TypeScript.
- Python: VS Code now uses [python-environment-tools](https://github.com/microsoft/python-environment-tools) to discover all Python installs and virtual environments.
- [Run "code actions" when saving a file, such as for automatically fixing lint issues.](https://code.visualstudio.com/updates/v1_91#_code-actions-on-save)
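To illustrate the theme-override feature, here is a minimal `settings.json` (JSONC) sketch. The theme name and color values are hypothetical placeholders; the `"default"` value is what the release notes describe for restoring a theme's own color:

```jsonc
{
  "workbench.colorCustomizations": {
    // These overrides apply only while the (hypothetical) "My Dark Theme" is active
    "[My Dark Theme]": {
      "editor.background": "#1b1b1b",
      // Set a previously overridden color back to the theme's original value
      "statusBar.border": "default"
    }
  }
}
```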
rudolfolah
1,913,056
Automating Linux User and Group Management using a Bash Script
Managing users and groups on a Linux system can be mundane and error-prone,...
0
2024-07-05T17:43:22
https://dev.to/felix_mordjifa/automating-linux-user-and-group-management-using-a-bash-script-548
bash
**Managing** users and groups on a Linux system can be mundane and error-prone, especially in environments where users frequently join or leave the system. In this article, I want to share my approach to creating a Bash script that automates user and group management, ensuring secure password handling and detailed logging. This is a task I am undertaking as part of my HNG Internship. Visit the [HNG Internship website](https://hng.tech/internship) to join us in pursuing insightful knowledge, and you can also reach out to hire skilled, job-ready individuals via the [HNG Hire page](https://hng.tech/hire). The source code can be found on my [GitHub](https://github.com/DagaduFelixMordjifa/Create_User.sh.git).

#### Introduction

User management is a critical task for system administrators. Automating this process not only saves time but also reduces the risk of errors. This script will:

- Create users from an input file.
- Assign users to specified groups.
- Generate secure random passwords.
- Log all actions for auditing purposes.

#### Prerequisites

- A Linux system with the Bash shell.
- `sudo` privileges to execute administrative commands.
- `openssl` for generating random passwords.

#### Script Breakdown

Here's the script in its entirety:

```bash
#!/bin/bash

# Check if the input file exists
if [ ! -f "$1" ]; then
    echo "Error: Input file not found."
    exit 1
fi

# Ensure log and secure files are initialized once
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"

# Initialize log file
if [ ! -f "$LOG_FILE" ]; then
    sudo touch "$LOG_FILE"
    sudo chown root:root "$LOG_FILE"
fi

# Initialize password file
if [ ! -f "$PASSWORD_FILE" ]; then
    sudo mkdir -p /var/secure
    sudo touch "$PASSWORD_FILE"
    sudo chown root:root "$PASSWORD_FILE"
    sudo chmod 600 "$PASSWORD_FILE"
fi

# Redirect stdout and stderr to the log file
exec > >(sudo tee -a "$LOG_FILE") 2>&1

# Function to check if a user exists
user_exists() {
    id "$1" &>/dev/null
}

# Function to check if a group exists
group_exists() {
    getent group "$1" > /dev/null 2>&1
}

# Function to check if a user is in a group
user_in_group() {
    id -nG "$1" | grep -qw "$2"
}

# Read each line from the input file
while IFS=';' read -r username groups; do
    # Trim whitespace
    username=$(echo "$username" | tr -d '[:space:]')
    groups=$(echo "$groups" | tr -d '[:space:]')

    # Check if the user already exists
    if user_exists "$username"; then
        echo "User $username already exists."
    else
        # Create user
        sudo useradd -m "$username"

        # Generate random password
        password=$(openssl rand -base64 12)

        # Set password for user
        echo "$username:$password" | sudo chpasswd

        # Log actions
        echo "User $username created. Password: $password"

        # Store passwords securely
        echo "$username,$password" | sudo tee -a "$PASSWORD_FILE"
    fi

    # Ensure the user's home directory and personal group exist
    sudo mkdir -p "/home/$username"
    sudo chown "$username:$username" "/home/$username"

    # Split the groups string into an array
    IFS=',' read -ra group_array <<< "$groups"

    # Check each group
    for group in "${group_array[@]}"; do
        if group_exists "$group"; then
            echo "Group $group exists."
        else
            echo "Group $group does not exist. Creating group $group."
            sudo groupadd "$group"
        fi

        if user_in_group "$username" "$group"; then
            echo "User $username is already in group $group."
        else
            echo "Adding user $username to group $group."
            sudo usermod -aG "$group" "$username"
        fi
    done
done < "$1"
```

#### How It Works

1. **Input File Check**: The script starts by checking if the input file exists. If not, it exits with an error message.
2. **Log and Secure File Initialization**: It initializes the log and password files, ensuring they have the correct permissions.
3. **Function Definitions**: Functions to check user existence, group existence, and user membership in a group are defined.
4. **User and Group Processing**: The script reads the input file line by line, processes each username and group, creates users and groups as needed, and assigns users to groups.
5. **Password Handling**: Secure random passwords are generated and assigned to new users, and all actions are logged.

#### Running the Script

1. **Prepare the Input File**: Create a file named `input_file.txt` with the following format (a sample file is available in the repository: [User_list.txt](https://github.com/DagaduFelixMordjifa/Create_User.sh/blob/main/User_list.txt)):

```
sela;developers,admins
felix;developers
kemuel;admins,users
```

2. **Make the Script Executable**:

```sh
chmod +x create_user.sh
```

3. **Run the Script**:

```sh
sudo ./create_user.sh input_file.txt
```

A quick way to verify the results is shown after the conclusion.

#### Conclusion

This Bash script streamlines user management on Linux systems by automating the creation of users with secure passwords, assigning them to the appropriate groups, and logging all actions for audit purposes. This automation helps system administrators save time and minimize errors. Feel free to customize this script further to suit your specific needs. Happy automating!

#### About the Author

I am Dagadu Felix Mordjifa, a DevOps and automation enthusiast.
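After a run, you can spot-check the results with standard tools. A minimal sketch, assuming the user `felix` from the sample input file above:

```bash
# Confirm the user was created and has a passwd entry with a home directory
getent passwd felix

# List the groups the user now belongs to
id -nG felix

# Check that the password file is only readable by root (mode 600)
sudo ls -l /var/secure/user_passwords.csv
```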
felix_mordjifa
1,913,054
Direct web slots: safe and confident in every play!
Direct web slots, safe and confident in every play! Discover a slot experience like no other at Direct Web Slots!...
0
2024-07-05T17:41:07
https://dev.to/ric_moa_cbb48ce3749ff149c/sltewbtrng-pldphay-manaicchthukkaareln-79g
Direct web slots: safe and confident in every play! Discover a slot experience like no other at [direct web slots](https://hhoc.org/)! Enjoy a wide variety of games and bonuses that really pay out every day, with a fast and secure deposit and withdrawal system. You can count on trustworthy, professional-grade service. Have fun anywhere, anytime on your mobile, with no download needed. Sign up today to receive special offers!
ric_moa_cbb48ce3749ff149c
1,913,053
Play online casino games with the best direct website!
Play online casino games with the best direct website! Discover a one-of-a-kind online casino gaming experience with our...
0
2024-07-05T17:40:30
https://dev.to/ric_moa_cbb48ce3749ff149c/elnekmkhaasionnailnkabewbtrngthiidiithiisud-10p6
Play online casino games with the best direct website! Discover a one-of-a-kind online casino gaming experience with our [direct website](https://hhoc.org/)! A stable, secure, and fast system lets you enjoy your favorite games anywhere, anytime, whether baccarat, slots, or sports games. Sign up today to receive special promotions and plenty of benefits. Don't miss out! Join us now for a superior gaming experience.
ric_moa_cbb48ce3749ff149c
1,913,052
Miracle Neck Fan - Portable, USB Rechargeable & 5-Speed Adjustable - for Outdoor Adventures with Digital Display
Heat Relief in Your Pocket: Discover the Best Portable Neck Fans Introduction: Picture this: it's a...
0
2024-07-05T17:39:47
https://dev.to/earl_wood_475ab0c51fe29a5/miracle-neck-fan-portable-usb-rechargeable-5-speed-adjustable-for-outdoor-adventures-with-digital-display-1olp
hat, golf
Heat Relief in Your Pocket: Discover the Best Portable Neck Fans

Introduction: Picture this: it's a scorching summer day, and you're out enjoying your favorite outdoor activities, whether it's hiking, sightseeing, or simply lounging at the beach. The sun beats down relentlessly, and sweat starts to trickle uncomfortably down your back. We've all been there, wishing for a quick and effective way to beat the heat without lugging around bulky cooling devices. Enter portable neck fans, the innovative solution that promises to keep you cool and comfortable wherever you go. Today, let's dive into the world of portable neck fans and discover how they can provide instant heat relief right in your pocket.

Hook: Ever found yourself desperately seeking relief from the sweltering heat while on the move? Let's explore how portable neck fans can be your ultimate cooling companion, no matter the season or setting.

The Problem: Whether you're commuting in crowded trains, exploring bustling city streets, or soaking up nature's beauty on a hike, staying cool can be a constant challenge. Traditional fans are often impractical, and carrying a handheld fan everywhere isn't always feasible. Plus, relying on shade or occasional breezes isn't reliable when the sun is relentless.

Objection Handling: You might wonder, "Do I really need a neck fan when I already have other cooling options?" Absolutely! Portable neck fans are designed for convenience and efficiency. Unlike handheld fans, they keep your hands free, making them perfect for multitasking or enjoying activities without interruption. Their compact size means you can slip them into your pocket or bag effortlessly, ensuring you're always prepared for unexpected heat waves.

Open Loops: But how do you choose the right portable neck fan for your needs? What features should you prioritize for optimal cooling on the go? Stick with me as we explore the top-rated options and essential features that make these fans a must-have for anyone seeking relief from summer heat.

How to Solve the Problem: Let's break it down. Portable neck fans come in various designs, from bladeless models to those with adjustable speeds and rechargeable batteries. Look for fans with lightweight construction and ergonomic designs for comfort during extended wear. Consider features like hands-free operation, quiet operation, and long battery life to ensure your fan enhances rather than disrupts your outdoor experience.

The Solution: When choosing the best portable neck fan, prioritize factors like airflow efficiency, ease of use, and durability. Opt for models with adjustable settings to customize cooling levels according to your preference. With a portable neck fan in your pocket, you'll have the ultimate tool to stay cool and comfortable, whether you're exploring new places or simply relaxing outdoors.

Conclusion: In conclusion, portable neck fans are revolutionizing how we beat the heat on the go. Compact, efficient, and designed for convenience, these fans ensure you can enjoy outdoor adventures without being at the mercy of rising temperatures. Invest in your comfort and discover the joy of staying cool with the best portable neck fans tailored to your lifestyle.

Call to Action: Ready to embrace portable comfort and beat the heat effortlessly? Explore our curated selection of the best portable neck fans today and experience the difference for yourself. Don't let heat waves hold you back; carry relief in your pocket wherever you go.

https://temu.to/m/ulsyq22olol
earl_wood_475ab0c51fe29a5
1,913,051
Improve inference speed by 30% for Stable Diffusion pipelines
I've been generating a lot of nail art images for my image site lately; finally, I'm using OneDiff to get...
0
2024-07-05T17:36:54
https://dev.to/codesmart_1/improve-your-inference-speed-for-stable-diffusion-pipelines-2h4
> I've been generating a lot of [nail art images](https://nailarts.pro) for my image site lately. Finally, I'm using [OneDiff](https://github.com/siliconflow/onediff) to get a 30% speedup, and along the way I found a few things that can improve the speed of Stable Diffusion inference, summarized below.

## Config

Here are some key ways to optimize inference speed for Stable Diffusion pipelines:

### 1. Use half-precision (FP16) instead of full precision (FP32)

- Load the model with `torch_dtype=torch.float16`
- This can provide up to 60% speedup with minimal quality loss

### 2. Enable TensorFloat-32 (TF32) on NVIDIA GPUs [1]

```python
import torch
torch.backends.cuda.matmul.allow_tf32 = True
```

### 3. Use a distilled model [1]

- Smaller distilled models like "nota-ai/bk-sdm-small" can be 1.5-1.6x faster
- They maintain comparable quality to full models

### 4. Enable memory-efficient attention implementations [1]

- Use xFormers or PyTorch 2.0's scaled dot product attention

### 5. Use CUDA graphs to reduce CPU overhead [3]

- Capture the UNet, VAE, and TextEncoder into CUDA graph format

### 6. Apply DeepSpeed-Inference optimizations [2][4]

- Can provide 1.7x speedup with minimal code changes
- Fuses operations and uses optimized CUDA kernels

### 7. Use torch.inference_mode() or torch.no_grad() [4]

- Disables gradient computation for a slight speedup

### 8. Consider specialized libraries like stable-fast [3]

- Provides CUDNN fusion, low-precision ops, fused attention, etc.
- Claims significant speedups over other methods

### 9. Reduce the number of inference steps if quality allows

### 10. Use a larger batch size if memory permits

By combining multiple optimizations, you can potentially reduce inference time from over 5 seconds to around 2-3 seconds for a single 512x512 image generation on high-end GPUs [1][2][4]. The exact speedup will depend on your specific hardware and model configuration.

Citations:

[1] https://huggingface.co/docs/diffusers/en/optimization/fp16
[2] https://www.philschmid.de/stable-diffusion-deepspeed-inference
[3] https://github.com/chengzeyi/stable-fast
[4] https://blog.cerebrium.ai/how-to-speed-up-stable-diffusion-to-a-2-second-inference-time-500x-improvement-d561c79a8952?gi=94a7e93c17f1
[5] https://www.felixsanz.dev/articles/ultimate-guide-to-optimizing-stable-diffusion-xl

## Try Other Inference Runtimes

There are several compile backends that can improve inference speed for Stable Diffusion pipelines. Here are some key options:

### 1. torch.compile

- Available in PyTorch 2.0+
- Can provide significant speedups with minimal code changes
- Example usage:

```python
model = torch.compile(model, mode="reduce-overhead")
```

- Compilation takes some time initially, but subsequent runs are faster [1]

### 2. OneDiff

- Can provide a 30% speedup with minimal code changes for Diffusers
- Easy to integrate with Hugging Face Diffusers [2]

### 3. DeepSpeed-Inference

- Can provide around 1.7x speedup with minimal code changes
- Optimizes operations and uses custom CUDA kernels
- Easy to integrate with Hugging Face Diffusers [2]

### 4. stable-fast

- Specialized optimization framework for Hugging Face Diffusers
- Implements techniques like CUDNN convolution fusion, low-precision ops, fused attention, etc.
- Claims significant speedups over other methods
- Provides fast compilation within seconds, much quicker than torch.compile or TensorRT [4]

### 5. TensorRT

- NVIDIA's deep learning inference optimizer and runtime
- Can provide substantial speedups but requires more setup

### 6. ONNX Runtime

- Cross-platform inference acceleration
- Supports various hardware accelerators

When choosing a compile backend, consider factors like:

- Ease of integration
- Compilation time
- Compatibility with your specific model and hardware
- Performance gains for your particular use case

For Stable Diffusion specifically, stable-fast seems promising as it's optimized for Diffusers and claims fast compilation times [4]. However, torch.compile is also a solid choice for its ease of use and good performance gains [1]. DeepSpeed-Inference is another strong contender, especially if you're already using the Hugging Face ecosystem [2].

Remember that the effectiveness of these optimizations can vary depending on your specific hardware, model, and inference settings. It's often worth benchmarking multiple options to find the best fit for your particular use case. A short combined sketch follows the citations below.

Citations:

[1] https://www.felixsanz.dev/articles/ultimate-guide-to-optimizing-stable-diffusion-xl
[2] https://github.com/siliconflow/onediff/tree/main/onediff_diffusers_extensions/examples/sd3
[3] https://www.philschmid.de/stable-diffusion-deepspeed-inference
[4] https://www.youtube.com/watch?v=AKBelBkPHYk
[5] https://github.com/chengzeyi/stable-fast
[6] https://www.reddit.com/r/StableDiffusion/comments/18lvwja/stablefast_v1_2x_speedup_for_svd_stable_video/
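To tie a few of these tips together, here is a minimal sketch combining FP16 loading, TF32, `torch.inference_mode()`, and `torch.compile` on the UNet in a Diffusers pipeline. The model ID and prompt are placeholder examples, and actual gains depend on your GPU:

```python
import torch
from diffusers import StableDiffusionPipeline

# Allow TF32 matmuls on NVIDIA GPUs (tip 2)
torch.backends.cuda.matmul.allow_tf32 = True

# Load the pipeline in half precision (tip 1) and move it to the GPU
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model ID
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Compile the UNet, the main per-step hotspot (backend option 1)
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

# Disable gradient tracking during generation (tip 7)
with torch.inference_mode():
    image = pipe("macro photo of nail art", num_inference_steps=30).images[0]

image.save("nail_art.png")
```

The first call pays a one-off compilation cost; subsequent generations reuse the compiled graph.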
codesmart_1
1,913,050
Basics of JavaScript - Day 1 of #100DaysOfFullStackChallenge
So, welcome back everyone 👋 I'm Aditya, and I'm starting a new series for the next 100 days to become...
0
2024-07-05T17:35:59
https://dev.to/zendeaditya/basic-of-javascript-day-1-of-100daysoffullstackchallneg-1a54
javascript, webdev, frontend, basic
So, welcome back everyone 👋 I'm Aditya, and I'm starting a new series for the next 100 days to become an excellent full-stack developer. Today is Day 1 of my journey. Today I revised some basic yet important concepts in JavaScript: arrays, strings, and objects. Whatever I learned today, I want to share with you. Let's start; I began the revision with objects in JS.

## 1. Objects

An object is nothing but a container that holds different types of values. There are mainly two kinds of object syntax:

```js
let obj = new Object();
```

This is one way to create an object; the second, simpler syntax is:

```js
let blogDetails = {
  author: "aditya",
  age: 21,
  blogsWriteOn: ["Dev.to", "Medium.com", "Hashnode.com"],
  location: "Pune",
};
```

This way you can create an object. In an object, we store data in `key: value` pairs, and we can store any type of data.

Accessing data from an object:

1. Using dot (`.`) notation:

```js
let authorLocation = blogDetails.location; // Pune
```

2. Using bracket (`[]`) notation:

```js
let authorName = blogDetails["author"]; // aditya
```

## 2. Arrays

An array is used to store multiple items under one variable name. In JavaScript, arrays are resizable and can contain different types of data. Here is what resizable means. Suppose the array looks like this:

```js
let fruits = ["apple", "orange", "banana", "pineapple"];
```

There are a total of 4 elements in the array, and if you assign an element like:

```js
fruits[5] = "grapes";
```

this is still acceptable in JS. If we log the array, the output shows that we skipped index 4, but it still works:

```
[ 'apple', 'orange', 'banana', 'pineapple', <1 empty item>, 'grapes' ]
```

Array methods: there are almost 46 array methods. We'll discuss some of the most important ones here. To understand these methods better, we'll learn by example:

```js
let numbersArray = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18];
```

**1. Array.at()**

This method is used to access a particular element of the array by its index. Keep in mind that array indexes start at `0`, not `1`. If we want to access the element at index `0`, we can write:

```js
console.log(numbersArray.at(0)); // 1
```

If we want to access the last element of the array, we can use the `length` property to get the total length. In this case, `numbersArray.length` outputs `18`. To access the last element (see also the `at()` sketch at the end of this post):

```js
console.log(numbersArray[numbersArray.length - 1]); // 18
```

**2. Array.concat()**

This method is used to combine two arrays into a single array. Here is a simple example (using a shorter `numbersArray` so the output stays readable):

```js
let numbersArray = [1, 2, 3, 4, 5];
let lettersArray = ["a", "b", "c", "d", "e", "f"];
```

The output of the code below is:

```js
console.log(numbersArray.concat(lettersArray));
// [1, 2, 3, 4, 5, 'a', 'b', 'c', 'd', 'e', 'f']
```

**3. Array.filter()**

This method filters the array's elements based on a condition that you write as a callback function. For example, if you want to print all the even numbers from the 18-element array above, you can write:

```js
let evenNumbers = numbersArray.filter((num) => num % 2 === 0);
```

the output will be:

```
[2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Another example, with an array of objects:

```js
// Filter on an array of objects
const students = [
  { name: "aditya", age: 21, mark: 80 },
  { name: "rator", age: 19, mark: 89 },
  { name: "kevin", age: 22, mark: 38 },
  { name: "atharva", age: 18, mark: 80 },
  { name: "random", age: 25, mark: 80 },
];
```

and the output is:

```js
const eligible = students.filter((std) => std.age > 20 && std.mark > 50);
console.log(eligible);
// [ { name: 'aditya', age: 21, mark: 80 }, { name: 'random', age: 25, mark: 80 } ]
```

The filter function takes two parameters:

1. `callbackFn`
2. `thisArg`

**callbackFn** is a function you provide that tests each element.
**thisArg** is an optional parameter; it is the value to use as `this` when executing `callbackFn`.

`filter` returns a new array (a shallow copy) containing only the elements that pass the callback's test; the original array is left unchanged.

**4. Array.forEach()**

This method executes the callback function on each element of the array. Here is an example of forEach:

```js
let numbersArray = [1, 2, 3, 4, 5];
numbersArray.forEach((num, idx) => {
  console.log(`The number at index ${idx} is ${num}`);
});
```

and the output will be:

```
The number at index 0 is 1
The number at index 1 is 2
The number at index 2 is 3
The number at index 3 is 4
The number at index 4 is 5
```

**5. Array.map()**

This array method creates a new array populated with the results of calling a provided function on every element of the calling array. An example of map:

```js
let mapFun = numbersArray.map((num) => num * 2);
```

and the output is:

```
[ 2, 4, 6, 8, 10 ]
```
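One small addition to the `Array.at()` section above: the main reason to prefer `at()` over bracket access is that it accepts negative indexes, which count back from the end of the array. A quick sketch:

```js
let numbersArray = [1, 2, 3, 4, 5];

// at() accepts negative indexes, counting from the end
console.log(numbersArray.at(-1)); // 5 (last element)
console.log(numbersArray.at(-2)); // 4

// Bracket access does not: this looks up the literal property key "-1"
console.log(numbersArray[-1]); // undefined
```

So `numbersArray.at(-1)` is a shorter alternative to `numbersArray[numbersArray.length - 1]`.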
zendeaditya
1,913,048
Wallet slots: enjoy gaming that's easy and safe
Wallet slots: enjoy gaming that's easy and safe. Open up a new experience with wallet slots!...
0
2024-07-05T17:35:14
https://dev.to/ric_moa_cbb48ce3749ff149c/sltwelth-snukaipkabkaarelnekmthiingaayaelapldphay-339k
Wallet slots: enjoy gaming that's easy and safe. Open up a new experience with [wallet slots](https://hhoc.org/)! Join in exciting and varied games, and enjoy a safe, reliable experience. No need to worry about deposits and withdrawals thanks to a convenient, fast wallet system. Plenty of promotions and bonuses are also waiting for you. Don't miss this great chance to walk away with both fun and prize money!
ric_moa_cbb48ce3749ff149c
1,913,047
Path To Ministry | FuturFaith Ministry
FuturFaith is an innovative online platform in Ireland dedicated to training individuals to become...
0
2024-07-05T17:35:02
https://dev.to/futurfaithadmin/path-to-ministry-futurfaith-ministry-469n
learning, career
FuturFaith is an innovative online platform in Ireland dedicated to training individuals to become legally registered wedding officiants. The comprehensive course offered by FuturFaith delves into various essential areas such as the legalities of officiating weddings, ceremonial practices, and effective public speaking. The curriculum also includes branding, marketing strategies, and self-care techniques, providing a holistic approach to ministry training.

The course is designed with flexibility in mind, allowing participants to learn at their own pace. Students can engage in practical assignments and shadow experienced officiants, gaining valuable hands-on experience. FuturFaith stands out for its inclusive approach, welcoming individuals from diverse backgrounds and beliefs, ensuring that everyone has access to quality training.

A unique aspect of FuturFaith is its focus on building a supportive community. Graduates are not only equipped with the necessary skills to officiate weddings but also receive ongoing support and resources to help them thrive in their new roles. This includes networking opportunities, continuous professional development, and access to a wide range of tools and materials.

FuturFaith also emphasises the importance of personal branding and marketing, guiding new officiants in establishing a distinctive presence in the wedding industry. This comprehensive training ensures that graduates are not only compliant with legal requirements but are also well-prepared to meet the expectations of modern couples.

In summary, FuturFaith offers a thorough, flexible, and inclusive training programme for aspiring wedding officiants in Ireland, blending practical experience with professional development and community support.

**For more information, visit [FuturFaith Ministry](https://www.futurfaith.com).**
futurfaithadmin