id | title | description | collection_id | published_timestamp | canonical_url | tag_list | body_markdown | user_username
---|---|---|---|---|---|---|---|---
1,913,015 | Computer Vision Meetup: Performance Optimisation for Multimodal LLMs | In this talk we’ll delve into Multi-Modal LLMs, exploring the fusion of language and vision in... | 0 | 2024-07-05T17:34:35 | https://dev.to/voxel51/computer-vision-meetup-performance-optimisation-for-multimodal-llms-k5l | ai, computervision, machinelearning, datascience | In this talk we’ll delve into Multi-Modal LLMs, exploring the fusion of language and vision in cutting-edge models. We'll highlight the challenges in handling diverse data heterogeneity, its architecture design, strategies for efficient training, and optimization techniques to enhance both performance and inference speed. Through case studies and future outlooks, we’ll illustrate the importance of these optimizations in advancing applications across various domains.
About the Speaker
[Neha Sharma](https://www.linkedin.com/in/hashux/) has a rich background in digital products and technology services, having delivered successful projects for industry giants like IBM and launched innovative products for tech startups. As a Product Manager at Ori, Neha specializes in developing cutting-edge AI solutions, actively engaging with AI use cases centered on the latest popular LLMs and demonstrating her commitment to staying at the forefront of AI technology.
Not a Meetup member? Sign up to attend the next event:
https://voxel51.com/computer-vision-ai-meetups/
Recorded on July 3, 2024 at the AI, Machine Learning and Computer Vision Meetup.
| jguerrero-voxel51 |
1,912,663 | How I stopped my procrastination: Insights into developer mindset | You’ve been at your job for years, you know how to write code good enough to get you by comfortably,... | 0 | 2024-07-05T17:33:52 | https://dev.to/middleware/how-i-stopped-my-procrastination-insights-into-developer-mindset-23hl | productivity, development, beginners, programming | You’ve been at your job for years; you know how to write code good enough to get you by comfortably. Yet every now and then there’s a ticket, a ticket that lingers on and on on that kanban board, red with delays, reschedules and “is this done?” comments. “I’ll pick it up first thing in the morning” is what you told yourself yesterday, and the day is already winding down.
<img src="https://media4.giphy.com/media/5q3NyUvgt1w9unrLJ9/200w.gif?cid=6c09b952va9i6h9jue6xqe50vw4o7n9vbwl2cfvn3jbwxnc1&ep=v1_gifs_search&rid=200w.gif&ct=g"/>
We as programmers and developers are prone to procrastination, and there are good reasons for it. Being lazy might be one of them, but the documentation of our brains runs much deeper than that.
## Why programmers are likely to procrastinate
### Cognitive overload 😵💫
As problem-solvers, we tend to use the ‘most-contextual’ part of the brain way too often, i.e. the **frontal cortex**.
The frontal cortex is responsible for executive functions, which involve analyzing information, identifying patterns, decision making, and most importantly, _doing-the-right-thing-even-if-it-is-the-hard-thing-to-do_ actions.
<img src="https://i.pinimg.com/originals/00/69/51/0069513d884baad8cee121e091fb51ad.gif"/>
For these seemingly-simple tasks, the frontal cortex needs **bucket-loads of energy**, exhibiting a very high metabolic rate.
We, as mere mortals with anti-blue-light glasses, have fixed reserves of energy, and when we expend them, compromises start to occur.
_Behave’s_ author Sapolsky points out that when the frontal cortex is overloaded, subjects become less prosocial: they lie more, are less charitable, and are more likely to cheat on their diets.
> “Willpower is more than just a metaphor; self-control is a finite resource”,
he says, stating that _doing-the-right-thing-even-if-it-is-the-hard-thing-to-do_ is not merely an emotional and moral choice, but is deeply tied to the physiology of the brain.
### I’m just a kid
We have all had our fair share of beginner-developer moments, when we plowed through nights and days trying to learn and retain how language _x_ or framework _y_ functions. And the number of times we almost gave up.
<img src="https://gifdb.com/images/high/what-a-noob-puppet-misyfy7rmepive8a.gif"/>
But after a certain threshold, coding didn’t seem to have that big of a draining effect. And there’s a good reason for it.
When we practice and learn something, we start shifting the cognitive processes to the more reflexive (stemming from a reflex action, an automatic “muscle memory”) parts of the brain, like the cerebellum.
And once that is achieved, we reduce the burden of computation on the Frontal cortex, making the tasks less energy-taxing.
### Decision Fatigue
Cognitive tasks are not limited to logic-oriented, calculative, cautious tasks that one does as a programmer.
It also includes a seemingly simple task: **decision making**, like choosing the right approach to write readable code, or something as simple as deciding which task to ship first.
And like cognitive loads, decision-fatigue is tied to your energy reserves too, and can ultimately affect your productivity.
## Process rather than Procrastinate
Now that we have broken down Procrastination, it's time to actually tackle it.
### 1. Germinate
To achieve anything, whether it’s your Jira ticket’s completion or a side project, one must start to build. _“Am I taking the most efficient, readable, perfectly orchestrated approach to solve this problem?”_; the friction of uncertainty and self-doubt usually makes it hard enough to begin with.
Opening your laptop is the first victory to overcome the code-block. And the only way to do that is to have something certain, some anchor that doesn’t give you any unease.
<img src="https://media4.giphy.com/media/ZXWWsIMwaHjiHMtNf4/200w.gif?cid=6c09b952ibhnhiqzxzxah6ue2m3owb9grbhswhlclie2nk62&ep=v1_gifs_search&rid=200w.gif&ct=g"/>
I’ve had times where I tried to find a perfect time window to start programming on a weekend; that was me trying to find a unicorn.
So I made it a point that at a certain hour in the morning, I will open Monkey Type and try to hit the day’s first 80+ words a minute.
It’s something I stole from Atomic Habits, and it helped immensely. This simple ritual in the first 5 minutes of my day consisted of adrenaline-rushed, muscle-memory reflexes that had nothing to do with the uncertainty of my task.
Thus, I could be eased into opening my IDE to complete that Jira ticket whose dread had haunted my Jira board for a week.
### 2. Granular-ize
Something I learnt fairly recently is to plan in a way that leaves your programming session with little to no decision making once you start writing code.
Everything else can, and should, be pre-planned into its smallest possible task granules, making things as clear as they can possibly be.
<img src="https://media2.giphy.com/media/Ig9dsuczC9dDkOKrIa/200w.gif?cid=6c09b952ay2xn8tymf29sg01ctdyj2c08tiwrwybh1uymklh&ep=v1_gifs_search&rid=200w.gif&ct=g"/>
The concept of the **Engineering Requirements Document** (ERD) was something I looked down upon as corporate scut-work for the upper management to feel included in the process.
Turns out I was wrong: it’s intended to make sure that all the decision fatigue occurs during the initial stages, so that the rest of the tickets shipped throughout the week remain relatively safe from taxing your frontal cortex’s energy reserves.
### 3. Game-ify
Updating my Jira tickets on my kanban board didn’t give me enough sense of ownership over the fruits of my labor. Nor was I able to track my personal progress: what I completed, what I shipped and what I spilled.
<img src="https://www.jenlivinglife.com/wp-content/uploads/2019/01/gyi7eys35a32b61551d35012060362_499x280.gif"/>
So I kept a set of my mini daily tasks in a local Microsoft To Do list. The satisfying chime of completing a task gave me a big boost in staying on track, without having to worry about what was left undone and what was yet to be picked up.
Since this was personal, I could add tasks like “get a brownie on 3 completions” and wait for it to ping up when I marked it as done.
Motivation is a cocktail of neurotransmitters; make sure you zap your brain with some hits every now and then.
Another tool that can help here is Middleware’s Project Overview. It systematically surfaced the reasons for my spills and overcommitment in a sprint, and it can give you an overview of your Jira boards and projects.
### 4. Gauge, Grapple and Give-up (not totally)
Friction in the building process goes far beyond decision fatigue. Fail at something for long enough and you are likely to not want to do that thing anymore.
<img src="https://64.media.tumblr.com/tumblr_lifd0k25wJ1qaakxao1_500.gif" />
It's wise to gauge the problem, allocate a time duration, and wrestle with the problem with all your might. But when you know that time is up, ask for help.
You need to make a voluntary decision to make the problem smaller than your hands, or add more hands to it.
There’s nothing wrong with an extra set of limbs, or brains. _Homo erectus_ showed the first signs of ordered socialization within communities, and then came _Homo sapiens_ with their big computational brains. Companionship precedes cognition, at least in evolution.
## Conclusion
We as developers often forget how human we are, even as we run services that cater to millions of humans.
>_Who are we, but gods on keyboards_?
But, something as little as a missed breakfast can affect our productivity, and something as big as a system-failure can drive us to work harder. We are complex machines with simple values.
As a developer who would rather write a script for an hour than click 4 buttons in 4 seconds, I can vouch: it’s always wiser to fix a process than the absolute problem. Fix habitual actions, and you fix procrastination.
| eforeshaan |
1,913,046 | เว็บตรง100: Fun Without Going Through an Agent! | เว็บตรง100: Fun Without Going Through an Agent! Experience the best online gaming with... | 0 | 2024-07-05T17:33:09 | https://dev.to/ric_moa_cbb48ce3749ff149c/ewbtrng100-khwaamsnukaebbaimtngphaaneeynt-p92 | เว็บตรง100: Fun Without Going Through an Agent!
Experience the best online gaming with [เว็บตรง100](https://hhoc.org/), with no agent in between! Safety and transparency guaranteed, plus great-value promotions and bonuses to keep you excited every day. Easy signup, with fast deposits and withdrawals 24 hours a day. Don't miss out! Become part of our community today for endless fun! | ric_moa_cbb48ce3749ff149c |
|
1,911,847 | Finding Your True North: A Concise Guide to Authenticity | Authenticity. It's a word thrown around a lot these days, but what does it truly mean? At its core,... | 27,967 | 2024-07-05T17:31:02 | https://dev.to/rishiabee/finding-your-true-north-a-concise-guide-to-authenticity-5cbp | authenticity, leadership, management | Authenticity. It's a word thrown around a lot these days, but what does it truly mean? At its core, authenticity is about being genuine, acting with integrity, and living according to your inner compass. It's the unwavering sense of "you" that shines through in your actions, decisions, and even your style.
---
> "**The only person you are destined to become is the person you decide to be.**" - Ralph Waldo Emerson _(This quote emphasizes the power of self-discovery and living according to your own choices.)_
> "**The greatest danger for most of us is not that our aim is too high and we miss it, but that it is too low and we reach it.**" - Michelangelo _(This quote reminds us that true fulfillment comes from striving to be our authentic selves, even if it's challenging.)_
---
So, how can you cultivate your own authentic self? Here's your roadmap:
- **Craft a Personal Manifesto:**
Take time to reflect on your values, your passions, and your non-negotiables. Write them down! This personal manifesto will serve as your guiding light.
- **Embrace Vulnerability:**
Let people see the real you, flaws and all. Authenticity thrives on connection, and connection requires vulnerability. Share your passions, your struggles, and your true opinions.
- **Consistency is Key:**
While your style or hobbies may evolve, your core values should remain constant. This consistency builds trust and allows others to understand who you are at your core.
- **Set Boundaries:**
Respecting your limitations and saying "no" is crucial for authenticity. Don't spread yourself thin trying to please everyone. Prioritize what aligns with your values and protects your well-being.
- **Know When to Stop:**
Don't force a certain image or behavior. Authenticity isn't about maintaining a facade. It's about allowing your true self to shine through, even when it's messy or inconvenient.
- **Transparency Matters:**
Be honest and upfront in your communication. Authenticity doesn't mean blurting out every thought, but it does mean striving for genuineness.
- **Embrace Your Range:**
You can be both a nature enthusiast and a techie, a lover of classical music and a dab hand at baking. Embrace the multifaceted nature of your personality!
Remember, authenticity is a journey, not a destination. There will be stumbles and moments of doubt. But by staying true to your core values and letting your genuine self shine through, you'll find the path to your own unique and powerful form of authenticity.
---
> "**To thine own self be true.**" - William Shakespeare _(A timeless quote reminding us to stay faithful to our authentic selves.)_
---
## The Stories Within
Consider Sarah, a graphic designer with a whimsical, artistic streak. Her colleagues know her for her vibrant clothing choices and her infectious laugh. But when it comes to client work, Sarah is a meticulous professional, delivering polished designs that prioritize functionality. This is authenticity in action. Sarah embraces her full self, both the playful and the serious, without compromising her core values of creativity and excellence.
Now, there's Michael, a history teacher known for his dry wit and passion for obscure historical events. At home, however, Michael is a devoted family man, indulging in silly movie nights with his kids. This apparent contradiction isn't inauthenticity; it's simply the multifaceted nature of a person. Michael's dedication to his students and his family stems from the same core value: a love for learning and connection.
---
> "**Authenticity is a collection of choices that reflect your inner values.**" - Frederic Buechner _(This quote highlights that authenticity is an active process based on your core beliefs.)_
---
## Be You, Unapologetically: Bullet Points to Authenticity
- **Know Your Values:** Craft a personal manifesto that reflects your core principles and what matters most to you.
- **Embrace Vulnerability:** Let people see the real you, strengths and weaknesses. Build connections through open communication.
- **Live with Consistency:** While your style may change, your core values should remain constant. This builds trust and allows others to understand who you are.
- **Set Boundaries:** Respect your limits and say "no" when needed. Focus your energy on what aligns with your values and protects your well-being.
- **Be Transparent:** Strive for honesty and genuineness in your communication.
- **Embrace Your Multifaceted Self:** You can have a range of interests and passions. Authenticity celebrates the unique blend that makes you, you.
- **The Journey Matters:** Authenticity is a continuous process. There will be challenges, but staying true to yourself is the path to a fulfilling life.
---
> "**Authenticity is the new rebellion.**" - Margaret Mead _(This quote suggests that living authentically can be a form of nonconformity in a world that often pressures us to fit in.)_
--- | rishiabee |
1,912,818 | BLOG ON HOW TO HOST STATIC WEBSITE ON AZURE BLOB STORAGE | You will need to download Visual Studio Code. It is a code editor. It is static because it is fixed... | 0 | 2024-07-05T17:30:52 | https://dev.to/free2soar007/blog-on-how-to-host-static-website-on-azure-blob-storage-4jo0 | You will need to download Visual Studio Code. It is a code editor.
The website is static because its content is fixed, i.e. one cannot interact with it.
STEPS
a. Download a sample website and save.
b. Open Visual Studio Code
i. To open Visual Studio Code, search for Visual Studio Code in the search bar at the bottom left, beside the Windows icon.
ii. Click on file at the top left. See below screenshot
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m0144iaw6p4z8hxjqai1.png)
iii. Select open folder from the dropdown to open the folder.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7x4zqek20p1gkcabbqfc.PNG)
iv. Go to the Downloads folder to select the folder you want to work on.
Click on it and click Select Folder.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y4anjxquivs6v28b5p6b.PNG)
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lu6id9ka3hmuc3zi1hed.PNG)
v. Click on index.html in Visual Studio Code to open it.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hqq54zycowxqr4skdd5q.PNG)
vi. Then, edit the portions written in white. They are between the tags. Take extra care not to delete or tamper with the code.
The portions written in white are the things rendered on the page, i.e. the things we can see when the page is opened in the browser.
After editing, you can now save.
2. Go to Azure Portal
i. Login to Azure
ii. Create a Storage Account. Search for Storage Accounts in search bar, and select it
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c57wr0nvhmog6kbxf67p.png)
iii. Click on +Create on the top left of the page
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jjrlua84o4tnyg5jy6tx.png)
iv. Edit the Basics tab and leave the others at their defaults. Then, review and create.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pvf0l90kwsbffikgxvgo.PNG)
After successfully creating the storage account, click on Go to Resource.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4wnhldvn6p35h5xq7nai.PNG)
v. Locate the static website option.
The static website option can be found either by clicking on 'Capabilities' or by clicking 'Data management' in the left pane, as shown below.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/disls5ytpkndpdsonm42.png)
Data Management
Click on Data Management, then, click on 'Static Website'
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/spoo63pkt922j36urv2n.png)
vi. After clicking Static website, enable it. Then, input the index document name and the error document path, and click Save.
The index document is essentially the root page; from the index, you can link to other pages.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ujgjogmonp3c9hcjbkj0.PNG)
vii. After saving, a `$web` container will be created in the storage account, along with the 'Primary endpoint' and 'Secondary endpoint' links. When opened in the browser, these links will take you to your website.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iorwxz6h46lbkli4vsjo.png)
viii. Go to Data Storage.
Click on Containers. This will open the list of containers, including the newly created `$web` container.
Click on the `$web` container that was created.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/teq7y2adlubxdi7w4xqt.PNG)
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vxdo6zjin9hx19awrh6u.png)
ix. Inside the `$web` container, click 'Upload' and select the files to be uploaded from your folder, or drag and drop them onto the container. Then, click Upload.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8hg4yu5zdu4hsmjink3l.PNG)
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6cz9hswjmj5s2ra4kqkq.PNG)
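If you prefer the command line, here is a hedged sketch of the same enable-and-upload flow using the Azure CLI. The account name `mystorageacct` and the `./website` folder are placeholders for this example:

```bash
# Enable static website hosting on the storage account
az storage blob service-properties update \
  --account-name mystorageacct \
  --static-website \
  --index-document index.html \
  --404-document error.html

# Upload the website files to the special $web container
az storage blob upload-batch \
  --account-name mystorageacct \
  --destination '$web' \
  --source ./website

# Print the primary endpoint of the static website
az storage account show \
  --name mystorageacct \
  --query "primaryEndpoints.web" \
  --output tsv
```

Enabling static website hosting creates the `$web` container automatically, so `upload-batch` can target it directly.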
x. To access the website, go to the overview page.
Select Data management, click on Static website, copy the 'Primary endpoint', and paste it into your browser.
https://anthonywebsite.z19.web.core.windows.net/
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n85zqi325axl5dm5ccdt.png) | free2soar007 |
|
1,913,044 | The Future of Digital Marketing: Trends to Watch in 2024 | As the digital landscape continues to evolve, businesses must stay ahead of the curve to remain... | 0 | 2024-07-05T17:27:48 | https://dev.to/amandigitalswag/the-future-of-digital-marketing-trends-to-watch-in-2024-3lbi | amandigitalswag, digitalmarketing, bestdigitalmarketingcompany | As the digital landscape continues to evolve, businesses must stay ahead of the curve to remain competitive. Here are some of the key trends shaping the future of digital marketing in 2024:
1. AI and Machine Learning Integration
Artificial Intelligence (AI) and Machine Learning (ML) are no longer just buzzwords; they are integral parts of digital marketing strategies. AI-driven tools can analyze vast amounts of data to provide insights into customer behavior, predict trends, and automate tasks. Chatbots and virtual assistants powered by AI enhance customer service by providing instant responses and personalized experiences.
2. Personalization at Scale
Consumers expect brands to understand their preferences and deliver personalized experiences. Advanced data analytics and AI enable marketers to create highly targeted campaigns that resonate with individual users. Personalization extends beyond email marketing to include dynamic website content, personalized product recommendations, and tailored social media ads.
3. Voice Search Optimization
With the increasing use of smart speakers and voice-activated devices, optimizing for voice search is becoming crucial. Voice search queries are typically longer and more conversational, requiring a shift in SEO strategies. Marketers need to focus on natural language processing and optimizing content for voice-specific keywords.
4. Video Marketing Dominance
Video content continues to dominate digital marketing, with platforms like YouTube, TikTok, and Instagram Reels leading the charge. Live streaming, interactive videos, and short-form content are particularly effective in capturing audience attention. Marketers should invest in high-quality video production and leverage storytelling to engage viewers.
5. Sustainability and Social Responsibility
Consumers are increasingly concerned about the ethical and environmental impact of their purchases. Brands that demonstrate a commitment to sustainability and social responsibility can build stronger connections with their audience. Transparency in supply chains, eco-friendly practices, and social advocacy are becoming important aspects of brand identity.
6. Augmented Reality (AR) and Virtual Reality (VR)
AR and VR technologies are transforming the way consumers interact with brands. From virtual try-ons to immersive brand experiences, these technologies offer innovative ways to engage customers. As AR and VR become more accessible, marketers should explore how these tools can enhance their campaigns.
7. Privacy and Data Security
With increasing concerns about data privacy, marketers must prioritize transparency and compliance with regulations such as GDPR and CCPA. Building trust with consumers through clear data policies and secure data handling practices is essential. Brands that respect user privacy will stand out in a crowded market.
8. Influencer Marketing Evolution
Influencer marketing continues to be a powerful strategy, but its landscape is evolving. Micro-influencers and nano-influencers are gaining traction due to their highly engaged and loyal followings. Authenticity and genuine connections are key, as consumers are becoming more discerning about influencer endorsements.
9. Omnichannel Marketing
An omnichannel approach ensures a seamless customer experience across multiple touchpoints, including online and offline channels. Integrated marketing strategies that provide consistent messaging and personalized interactions can drive higher engagement and conversions. Brands should focus on creating cohesive journeys that cater to the preferences of their target audience.
10. Blockchain for Digital Advertising
Blockchain technology offers potential solutions to some of the challenges faced in digital advertising, such as ad fraud and transparency issues. By providing a decentralized and secure way to verify transactions, blockchain can enhance the credibility and effectiveness of digital ad campaigns.
Conclusion
The [digital marketing](https://amandigitalswag.in/shantipuram/) landscape is rapidly changing, driven by technological advancements and shifting consumer behaviors. Staying informed about these trends and adapting strategies accordingly will be crucial for businesses aiming to thrive in 2024 and beyond. By embracing innovation and focusing on customer-centric approaches, marketers can navigate the complexities of the digital world and achieve long-term success.
[AmanDigitalSwag](https://amandigitalswag.in/shantipuram/) offers the [Best Digital Marketing](https://amandigitalswag.in/shantipuram/). Save 30% with the [Top Online Marketing Agency](https://amandigitalswag.in/shantipuram/)! Experience rapid growth and outstanding service. For more information, visit us at:
https://amandigitalswag.in/shantipuram/
| amandigitalswag |
1,913,042 | Laravel logging | To help you learn more about what's happening within your application, Laravel provides robust... | 0 | 2024-07-05T17:20:30 | https://dev.to/developeralamin/laravel-logging-32jm | To help you learn more about what's happening within your application, Laravel provides robust logging services that allow you to log messages to files, the system error log, and even to Slack to notify your entire team.
The log files live in this folder:
```
storage/logs
```
We can create a custom Laravel log file. Go to `config/logging.php`:
```php
// By default, the 'single' channel looks like this:
'single' => [
    'driver' => 'single',
    'path' => storage_path('logs/laravel.log'),
    'level' => env('LOG_LEVEL', 'debug'),
    'replace_placeholders' => true,
],

// Here is a custom log channel example:
'custom' => [
    'driver' => 'single',
    'path' => storage_path('logs/custom.log'),
    'level' => env('LOG_LEVEL', 'debug'),
    'replace_placeholders' => true,
],
```
If we want every log entry written to our custom log file, edit the `.env` file and replace:
```
# before
LOG_CHANNEL=stack

# after
LOG_CHANNEL=custom
```
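Once the channel is configured, writing to it is straightforward. A minimal sketch, assuming the `custom` channel defined above:

```php
use Illuminate\Support\Facades\Log;

// Goes to storage/logs/custom.log once LOG_CHANNEL=custom is set
Log::info('User signed in', ['id' => 42]);

// Or target the custom channel explicitly, regardless of the default
Log::channel('custom')->warning('Disk space is running low');
```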
| developeralamin |
|
1,913,011 | Deadline Extended for the Wix Studio Challenge | Due to the fourth of July holiday in the U.S. and some confusion around submission guidelines, we... | 0 | 2024-07-05T17:17:08 | https://dev.to/devteam/deadline-extended-for-the-wix-studio-challenge-2c7f | wixstudiochallenge, webdev, devchallenge, javascript | Due to the fourth of July holiday in the U.S. and some confusion around submission guidelines, we have decided to extend the deadline for the Wix Studio Challenge and give participants an extra week to work on their projects.
**<mark>The NEW deadline for the Wix Studio Challenge is July 14.</mark> We will announce winners on July 16.**
As such, we'll take this opportunity to clarify some elements of the challenge below.
### All projects must be created with the **Wix Studio Editor**
The Wix Studio Editor and the standard Wix Editor are different platforms. To ensure you are working in Wix Studio, new users should take the following steps:
1. Navigate to the [Wix Studio page](https://www.wix.com/studio)
2. Click "Start Creating"
3. Create an account
4. Select "For a client, as a freelancer or an agency" and click continue
5. Click "Go to Wix Studio"
6. Click "As a freelancer"
7. Select "Web Development" and click continue
8. Click "Start Creating"
### Usage of Wix Apps and Design Templates
You may use any app on the Wix App Market, including those made by Wix (i.e. Wix Apps), but you must start your project from a blank template. Please do not use any pre-built visual design templates.
### Submission Template
The submission template should have pre-filled prompts for you to fill out. On the off chance that it does not, please ensure your submission includes the following:
- An overview of your project
- A link to your Wix Studio app
- Screenshots of your project
- Insight into how you leveraged Wix Studio’s JavaScript development capabilities
- List of APIs and libraries you utilized
- List of teammate usernames if you worked on a team
Here is the template again for anyone who was having trouble previously:
{% cta https://dev.to/new?prefill=---%0Atitle%3A%20%0Apublished%3A%20%0Atags%3A%20%20devchallenge%2C%20wixstudiochallenge%2C%20webdev%2C%20javascript%0A---%0A%0A*This%20is%20a%20submission%20for%20the%20%5BWix%20Studio%20Challenge%20%5D(https%3A%2F%2Fdev.to%2Fchallenges%2Fwix).*%0A%0A%0A%23%23%20What%20I%20Built%0A%3C!--%20Share%20an%20overview%20about%20your%20project.%20--%3E%0A%0A%23%23%20Demo%0A%3C!--%20Share%20a%20link%20to%20your%20Wix%20Studio%20app%20and%20include%20some%20screenshots%20here.%20--%3E%0A%0A%23%23%20Development%20Journey%0A%3C!--%20Tell%20us%20how%20you%20leveraged%20Wix%20Studio%E2%80%99s%20JavaScript%20development%20capabilities--%3E%0A%0A%3C!--%20Which%20APIs%20and%20Libraries%20did%20you%20utilize%3F%20--%3E%0A%0A%3C!--%20Team%20Submissions%3A%20Please%20pick%20one%20member%20to%20publish%20the%20submission%20and%20credit%20teammates%20by%20listing%20their%20DEV%20usernames%20directly%20in%20the%20body%20of%20the%20post.%20--%3E%0A%0A%0A%3C!--%20Don%27t%20forget%20to%20add%20a%20cover%20image%20(if%20you%20want).%20--%3E%0A%0A%0A%3C!--%20Thanks%20for%20participating!%20%E2%86%92 %}
Wix Studio Challenge Submission Template
{% endcta %}
### Challenge Overview
If you’re new to the challenge or couldn’t participate because of our previous deadline, consider this your lucky day!
**We challenge you to build an innovative eCommerce experience with Wix Studio.** [Ania Kubów](https://www.youtube.com/@AniaKubow) is our special guest judge, and we have $3,000 up for grabs for one talented winner!
You can find all challenge details in our original announcement post:
{% link https://dev.to/devteam/join-us-for-the-wix-studio-challenge-with-special-guest-judge-ania-kubow-3000-in-prizes-3ial %}
We hope this extra time gives all participants the opportunity to build something they are truly proud of! We can’t wait to see your submissions.
Happy Coding!
| thepracticaldev |
1,913,040 | Understanding Web Hosting: A Comprehensive Guide | Web hosting is a fundamental component of the internet, serving as the backbone for websites and... | 0 | 2024-07-05T17:12:30 | https://dev.to/leo_jack_4f2533dc77f81d90/understanding-web-hosting-a-comprehensive-guide-2h9d | webtesting, webhostingcompany | Web hosting is a fundamental component of the internet, serving as the backbone for websites and online applications. For anyone looking to establish an online presence, understanding web hosting is crucial. This guide will delve into what web hosting is, the types of web hosting available, and how to choose the right web hosting service for your needs.
**What is Web Hosting?**
**[Web hosting](https://www.riteanswers.com/)** refers to the service that allows individuals and organizations to make their websites accessible via the World Wide Web. Web hosting providers supply the technologies and services needed for a website or webpage to be viewed on the internet. Websites are hosted, or stored, on special computers called servers. When internet users want to view your website, all they need to do is type your website address or domain into their browser. Their computer will then connect to your server and your web pages will be delivered to them through the browser.
**Types of Web Hosting**
Shared Hosting: This is the most common and affordable type of web hosting. In shared hosting, multiple websites share a single server's resources, including bandwidth, storage, and processing power. It's ideal for small websites and blogs with low to moderate traffic.
**VPS Hosting**: Virtual Private Server (VPS) hosting provides a middle ground between shared hosting and dedicated hosting. A VPS hosting environment mimics a dedicated server within a shared hosting environment. It is suitable for websites that have grown beyond the limitations of shared hosting.
**Dedicated Hosting**: With dedicated hosting, you get an entire server to yourself. This type of hosting is best for websites with high traffic volumes or those requiring substantial resources. It offers the highest level of performance, security, and control.
**Cloud Hosting**: Cloud hosting uses a network of servers to ensure maximum uptime and scalability. This type of hosting is perfect for websites that experience fluctuating traffic levels, as it allows for resources to be allocated dynamically based on demand.
**Managed Hosting**: In managed hosting, the hosting provider takes care of all server-related issues, including maintenance, security, and updates. This option is excellent for those who do not have the technical expertise to manage a server or who prefer to focus on their core business activities.
**Choosing the Right Web Hosting Service**
When selecting a **[web hosting service](https://www.riteanswers.com/)**, consider the following factors:
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cztig6gcjkve47pznfj3.jpg)
**Performance**: Look for hosting services that guarantee high uptime and fast load times.
**Security**: Ensure that the hosting provider offers robust security measures, including SSL certificates and regular backups.
**Support**: Opt for providers that offer 24/7 customer support to help resolve any issues promptly.
**Scalability**: Choose a hosting service that can grow with your website, offering options to upgrade resources as needed.
**Cost**: Compare the pricing plans of different providers to find one that offers good value for your specific requirements.
Understanding the various aspects of web hosting will enable you to make an informed decision that best suits your website's needs. Whether you're starting a personal blog or running a large e-commerce site, the right web hosting service is critical to your online success. | leo_jack_4f2533dc77f81d90 |
1,913,039 | 2058. Find the Minimum and Maximum Number of Nodes Between Critical Points | 2058. Find the Minimum and Maximum Number of Nodes Between Critical Points Medium A critical point... | 27,523 | 2024-07-05T17:09:40 | https://dev.to/mdarifulhaque/2058-find-the-minimum-and-maximum-number-of-nodes-between-critical-points-13c3 | php, leetcode, algorithms, programming | 2058\. Find the Minimum and Maximum Number of Nodes Between Critical Points
Medium
A **critical point** in a linked list is defined as **either** a **local maxima** or a **local minima**.
A node is a **local maxima** if the current node has a value **strictly greater** than the previous node and the next node.
A node is a **local minima** if the current node has a value **strictly smaller** than the previous node and the next node.
Note that a node can only be a local maxima/minima if there exists **both** a previous node and a next node.
Given a linked list `head`, return _an array of length 2 containing `[minDistance, maxDistance]` where `minDistance` is the **minimum distance** between **any two distinct** critical points and `maxDistance` is the **maximum distance** between **any two distinct** critical points. If there are **fewer** than two critical points, return `[-1, -1]`_.
**Example 1:**
![a1](https://assets.leetcode.com/uploads/2021/10/13/a1.png)
- **Input:** head = [3,1]
- **Output:** [-1,-1]
- **Explanation:** There are no critical points in [3,1].
**Example 2:**
![a2](https://assets.leetcode.com/uploads/2021/10/13/a2.png)
- **Input:** head = [5,3,1,2,5,1,2]
- **Output:** [1,3]
- **Explanation:** There are three critical points:
- [5,3,1,2,5,1,2]: The third node is a local minima because 1 is less than 3 and 2.
- [5,3,1,2,5,1,2]: The fifth node is a local maxima because 5 is greater than 2 and 1.
- [5,3,1,2,5,1,2]: The sixth node is a local minima because 1 is less than 5 and 2.
- The minimum distance is between the fifth and the sixth node. minDistance = 6 - 5 = 1.
- The maximum distance is between the third and the sixth node. maxDistance = 6 - 3 = 3.
**Example 3:**
![a5](https://assets.leetcode.com/uploads/2021/10/14/a5.png)
- **Input:** head = [1,3,2,2,3,2,2,2,7]
- **Output:** [3,3]
- **Explanation:** There are two critical points:
- [1,3,2,2,3,2,2,2,7]: The second node is a local maxima because 3 is greater than 1 and 2.
- [1,3,2,2,3,2,2,2,7]: The fifth node is a local maxima because 3 is greater than 2 and 2.
- Both the minimum and maximum distances are between the second and the fifth node.
- Thus, minDistance and maxDistance is 5 - 2 = 3.
- Note that the last node is not considered a local maxima because it does not have a next node.
**Constraints:**
- The number of nodes in the list is in the range <code>[2, 10<sup>5</sup>]</code>.
- <code>1 <= Node.val <= 10<sup>5</sup></code>
**Solution:**
```php
/**
 * Definition for a singly-linked list.
 * class ListNode {
 *     public $val = 0;
 *     public $next = null;
 *     function __construct($val = 0, $next = null) {
 *         $this->val = $val;
 *         $this->next = $next;
 *     }
 * }
 */
class Solution {

    /**
     * @param ListNode $head
     * @return Integer[]
     */
    function nodesBetweenCriticalPoints($head) {
        $result = [-1, -1];
        // Initialize minimum distance to the maximum possible value
        $minDistance = PHP_INT_MAX;

        // Pointers to track the previous node, current node, and indices
        $previousNode = $head;
        $currentNode = $head->next;
        $currentIndex = 1;
        $previousCriticalIndex = 0;
        $firstCriticalIndex = 0;

        while ($currentNode->next != null) {
            // Check if the current node is a local maxima or minima
            if (($currentNode->val < $previousNode->val &&
                 $currentNode->val < $currentNode->next->val) ||
                ($currentNode->val > $previousNode->val &&
                 $currentNode->val > $currentNode->next->val)) {
                // If this is the first critical point found
                if ($previousCriticalIndex == 0) {
                    $previousCriticalIndex = $currentIndex;
                    $firstCriticalIndex = $currentIndex;
                } else {
                    // Calculate the minimum distance between critical points
                    $minDistance = min($minDistance, $currentIndex - $previousCriticalIndex);
                    $previousCriticalIndex = $currentIndex;
                }
            }
            // Move to the next node and update indices
            $currentIndex++;
            $previousNode = $currentNode;
            $currentNode = $currentNode->next;
        }

        // If at least two critical points were found
        if ($minDistance != PHP_INT_MAX) {
            $maxDistance = $previousCriticalIndex - $firstCriticalIndex;
            $result = [$minDistance, $maxDistance];
        }

        return $result;
    }
}
```
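To sanity-check the solution locally, here is a hedged sketch using Example 2. It assumes a `ListNode` class matching the definition in the comment above:

```php
$values = [5, 3, 1, 2, 5, 1, 2];
$head = null;
// Build the linked list back to front
foreach (array_reverse($values) as $v) {
    $head = new ListNode($v, $head);
}
$solution = new Solution();
print_r($solution->nodesBetweenCriticalPoints($head)); // Array ( [0] => 1 [1] => 3 )
```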
- **[LinkedIn](https://www.linkedin.com/in/arifulhaque/)**
- **[GitHub](https://github.com/mah-shamim)**
| mdarifulhaque |
1,913,038 | Solving Robotic or Distorted Voice in Discord: A Comprehensive Guide with Free Phone Numbers for Discord | Have you ever been in the middle of an intense gaming session or an important chat on Discord, only... | 0 | 2024-07-05T17:09:19 | https://dev.to/legitsms/solving-robotic-or-distorted-voice-in-discord-a-comprehensive-guide-with-free-phone-numbers-for-discord-1f6e | webdev, beginners, javascript, programming | Have you ever been in the middle of an intense gaming session or an important chat on Discord, only to have your voice sound like a robot or become annoyingly distorted? You're not alone. This issue can be incredibly frustrating, but fortunately, there are several solutions you can try to resolve it. In this article, we'll dive into why this problem occurs and offer practical, easy-to-follow steps to fix it. Plus, we’ll cover some related tips, such as how to use free phone numbers for Discord and free phone numbers for Discord verification, which can come in handy for troubleshooting and enhancing your overall Discord experience.
Understanding the Issue: Why Does Your Voice Sound Robotic or Distorted on Discord?
Before we jump into the solutions, it’s essential to understand why this problem occurs. A robotic or distorted voice on Discord is typically due to one or more of the following reasons:
- Network Issues: Poor internet connection or high latency can disrupt the audio quality.
- Hardware Problems: Issues with your microphone or headset.
- Software Conflicts: Other applications or settings interfering with Discord.
- Server Problems: Issues with the Discord servers or the specific voice channel you’re using.
Fixing Network Issues
Check Your Internet Connection
Your internet connection is the backbone of a smooth Discord experience. Here's what you can do:
1. Speed Test: Run an internet speed test to ensure you have a stable and fast connection.
2. Wired Connection: If possible, use a wired connection instead of Wi-Fi to reduce latency.
3. Restart Router: Sometimes, a simple router restart can resolve temporary connectivity issues.
Optimize Discord’s Voice Settings
Discord has several settings that can be tweaked to improve voice quality:
1. Open Discord and go to User Settings (cog icon).
2. Select Voice & Video.
3. Under Input Sensitivity, enable Automatically determine input sensitivity.
4. Set the Input Mode to Voice Activity.
5. Adjust the Quality of Service (QoS) setting by disabling Enable Quality of Service High Packet Priority.
Addressing Hardware Problems
Microphone and Headset Check
Ensure your microphone and headset are functioning correctly:
1. Check Connections: Ensure all cables are securely connected.
2. Test with Other Apps: Use your microphone and headset with another application to see if the issue persists.
3. Update Drivers: Make sure your audio drivers are up to date. You can do this through Device Manager on Windows or via your hardware manufacturer’s website.
Reduce Background Noise
Background noise can cause distortion:
1. Use Noise-Cancelling Features: Many modern headsets come with noise-cancelling features. Enable these if available.
2. Discord Noise Suppression: Enable Krisp, Discord’s built-in noise suppression feature, by going to User Settings > Voice & Video > Advanced and toggling Noise Suppression.
Resolving Software Conflicts
Close Background Applications
Other applications running in the background can interfere with Discord:
1. Close Unnecessary Programs: Use Task Manager (Ctrl + Shift + Esc) to close any unnecessary applications.
2. Check for Conflicts: Ensure no other voice chat applications are running that might conflict with Discord.
Update Discord
Always use the latest version of Discord:
1. Check for Updates: Discord usually updates automatically, but you can check manually by restarting the app.
Adjust Quality of Service Settings
QoS settings can sometimes cause issues:
1. Disable QoS: Go to User Settings > Voice & Video > Quality of Service and disable Enable Quality of Service High Packet Priority.
Server Problems and Solutions
Switch Servers or Regions
Sometimes, the issue might be with the Discord server or region you are connected to:
1. Change Server Region: If you are an admin of the server, go to Server Settings > Overview > Server Region and change to a different region.
2. Join a Different Voice Channel: Sometimes, simply switching to another voice channel can resolve the issue.
Enhancing Your Discord Experience: Using Free Phone Numbers for Verification
Verification is an essential step for securing your Discord account and avoiding spammers. Here’s how to use free phone numbers effectively.
Why Use Free Phone Numbers for Discord Verification?
Using a temp number for Discord can be beneficial:
1. Privacy: Protect your number.
2. Convenience: Easily get past verification steps without using your real number.
How to Get a Temp Phone Number for Discord
1. Online Services: Use services like LegitSMS.com as the best and most reliable provider of free phone numbers for Discord.
2. Temporary Number Apps: Apps like Hushed or Burner provide temporary phone numbers.
3. Comprehensive Guide: Read this blog post about using temp phone numbers for Discord registration and verification for more detailed instructions.
Verification Process
1. Get a Number: Obtain a temp number for Discord from one of the services mentioned.
2. Enter Number in Discord: Go to User Settings > My Account > Phone and enter the temporary number.
3. Receive Code: Check the temporary number service for the verification code.
4. Enter Code: Enter the code in Discord to complete verification.
Conclusion
Dealing with a robotic or distorted voice on Discord can be annoying, but by following the steps outlined above, you can significantly improve your audio quality and overall Discord experience. Whether it’s optimizing your network settings, checking your hardware, resolving software conflicts, or even using a temp phone number for Discord verification, these solutions cover all the bases. Now, you can enjoy clear, uninterrupted conversations on Discord, just the way it should be.
FAQs
1. What causes a robotic voice on Discord?
- A robotic voice on Discord is usually caused by network issues, hardware problems, or software conflicts.
2. How can I fix my distorted voice on Discord?
- To fix a distorted voice, check your internet connection, update your audio drivers, adjust Discord’s voice settings, and ensure no background applications are interfering.
3. Can I use a temporary phone number for Discord verification?
- Yes, using a temporary phone number for Discord verification is a great way to protect your privacy and easily complete the verification process.
4. Why should I disable QoS on Discord?
- Disabling QoS can help if you are experiencing audio issues, as sometimes QoS settings can cause problems with certain network configurations.
5. What should I do if changing servers doesn't fix my voice issues on Discord?
- If changing servers doesn’t help, try restarting your router, closing background applications, updating Discord, or using a wired connection to improve stability.
For more information on free phone numbers and FAQs, visit the LegitSMS FAQ. If you need further assistance with Discord, check out the official Discord website here: https://discord.com/. | legitsms |
1,913,037 | Top Engineering College in Dharmapuri: Shreenivasa Engineering College | When it comes to pursuing engineering education in Dharmapuri, Shreenivasa Engineering College stands... | 0 | 2024-07-05T17:09:11 | https://dev.to/shreenivasa_123/top-engineering-college-in-dharmapuri-shreenivasa-engineering-college-289b | college, education, engineering | When it comes to pursuing engineering education in Dharmapuri, Shreenivasa Engineering College stands out as the [top engineering college in Dharmapuri](https://shreenivasa.info/). Our institution is renowned for its commitment to academic excellence, cutting-edge facilities, and a student-centric approach. With a diverse range of engineering programs, we cater to the evolving needs of the industry and ensure that our graduates are well-equipped to face future challenges.
At Shreenivasa Engineering College, we pride ourselves on our world-class faculty who are not only experts in their respective fields but also passionate about imparting knowledge. Our state-of-the-art laboratories and research centers provide students with hands-on experience, fostering innovation and practical skills. Additionally, our college boasts an impressive placement record, with numerous top-tier companies recruiting our graduates every year.
Our curriculum is designed to be both rigorous and flexible, allowing students to specialize in areas of interest while gaining a solid foundation in core engineering principles. We emphasize project-based learning, encouraging students to engage in real-world problem-solving and collaborative projects. This approach not only enhances technical skills but also develops critical thinking and teamwork abilities.
Shreenivasa Engineering College also offers a vibrant campus life, with a wide range of extracurricular activities and student organizations. From technical clubs to cultural events, there are numerous opportunities for students to explore their interests and develop leadership skills. Our supportive campus environment ensures that every student can thrive both academically and personally.
As the top engineering college in Dharmapuri, Shreenivasa Engineering College is dedicated to shaping the engineers of tomorrow. We continuously update our programs to keep pace with technological advancements and industry trends. Our strong industry connections and alumni network provide students with valuable mentorship and career opportunities.
Choosing Shreenivasa Engineering College means choosing a future filled with promise and potential. Join us to embark on an exciting journey of discovery and innovation. With our comprehensive education and supportive community, you will be well-prepared to make significant contributions to the engineering field.
Discover why Shreenivasa Engineering College is recognized as the top engineering college in Dharmapuri. Enroll today and become part of an institution that is committed to excellence and success.
| shreenivasa_123 |
1,913,036 | PYTHON THE BEAST! | Alright Python is an object-oriented programing language. It makes development of new applications... | 0 | 2024-07-05T17:08:54 | https://dev.to/daphneynep/python-the-beast-24mp | Alright Python is an object-oriented programing language. It makes development of new applications useful for high-level built-in data structures and dynamic typing. Also, it’s useful for scripting or “glue” code to combine existing components written in different languages. Ok as I am diving more into python, learning the modules and doing some of the labs given I am getting nervous because it’s not clicking to me yet! I mean most people says python is their favorite program language. Maybe It could be because it’s “simple and easy to learn syntax emphasizes readability and therefore reduces the cost and complication of long-term program maintenance” (Flatiron School, Intro to Python).
Why didn’t I know Python was so difficult? As much as it makes sense, it’s also confusing at the same time. Sounds crazy, right? I just feel like there are a lot of layers that keep coming one after the other as you finish one part; it’s like a mystery. It’s crazy, but I really feel challenged to continue learning and gaining a better understanding of how to code with Python. For example, below I am currently stuck on something like this:
**# Wrong:**
```python
import math

def foo(x):
    if x >= 0:
        return math.sqrt(x)

def bar(x):
    if x < 0:
        return
    return math.sqrt(x)
```
**# Correct:**
```python
import math

def foo(x):
    if x >= 0:
        return math.sqrt(x)
    else:
        return None

def bar(x):
    if x < 0:
        return None
    return math.sqrt(x)
```
When I try to follow the correct way to write my code, it doesn’t seem to work. What I am working on is a bit different from the code above, but the idea is the same. I’m not sure what I am doing wrong yet, but I’ll keep running it again and again until I get it right.
| daphneynep |
|
1,901,749 | 🌀Huracán: The educational project for engineers learning ZK | Learning about ZK today is not an easy task. It is a new technology, without much documentation.... | 0 | 2024-07-05T17:06:48 | https://dev.to/turupawn/huracan-el-proyecto-educativo-para-ingenieros-aprendiendo-zk-2dl4 | ---
title: 🌀Huracán: The educational project for engineers learning ZK
published: true
description:
tags:
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-26 16:56 +0000
---
Learning about ZK today is not an easy task. It is a new technology, without much documentation. Huracán was born from my own need to learn about ZK in a practical way, oriented toward developers and engineers.
Huracán is a fully functional project, capable of making private transactions on Ethereum and EVM blockchains. It is based on privacy projects currently in operation, but with the minimal code needed to ease the learning process. We will cover how this technology can adapt to new use cases and future regulations. In addition, at the end of the article I share what is needed to take this project from testnet trials to real production use.
By the end of this guide you will be able to go to other projects of the same nature and understand how they are built.
Prefer to look at the complete code? [Head over to Github](https://github.com/Turupawn/Huracan) to find all the code in this guide.
_How Huracán is built:_
* Circuit written in Circom
* Hashing with Poseidon
* Deposit and withdrawal logic contracts in Solidity
* Merkle tree construction in JS and Solidity, verification in Circom
* Frontend in vanilla HTML and JS
* web3.js for web3 interaction and snarkjs for proving in the browser (zk-WASM)
* Relayer with ethers.js 6 and express to preserve users' anonymity
## Table of contents
1. [How Huracán works](#1-how-huracán-works)
1. [The circuit](#2-the-circuit)
1. [The contracts](#3-the-contracts)
1. [The frontend](#4-the-frontend)
1. [The relayer](#5-the-relayer)
1. [How to take Huracán to production?](#6-how-to-take-huracán-to-production)
1. [Ideas to dig deeper](#7-ideas-to-dig-deeper)
## 1. How Huracán works
Huracán is a DeFi tool that protects its users' identity using the technique known as _anonymous inclusion proofs_ to build what we commonly call a _mixer_. This system can prove that a user has deposited ether into a contract without revealing which one of them did.
![Depositing into Huracán](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/if5yhd2ef1vwr0bujxkz.png)
<p><small><em>Each user who deposits Ether into Huracán is placed as a leaf in a merkle tree inside the contract</em></small></p>
To achieve this, we need a smart contract where the funds will be deposited; on each deposit it will update a merkle tree in which each leaf represents a depositor. Additionally, we need a circuit that generates the inclusion proofs that keep the user anonymous when withdrawing the funds. And also a relayer that executes the transaction on behalf of the anonymous user to protect their privacy.
![Withdrawal from Huracán](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/igrb94o5cidddo3c8s5z.png)
<p><small><em>Users can later withdraw their funds by proving they are part of the merkle tree without revealing which leaf belongs to them</em></small></p>
Below you will find the code, a brief explanation, and the support materials needed to build and launch your own privacy project.
## 2. The circuit
_Support material: [Private Smart Contracts with Solidity and Circom](https://dev.to/turupawn/smart-contracts-privados-con-solidity-y-circom-3a8h)_
The circuit is in charge of verifying that you are part of the merkle tree, that is, that you are one of the depositors, without revealing which one: you keep the parameters private while generating an inclusion proof that a smart contract can verify. Which private parameters? During the deposit, we hash a private key and a nullifier to create a new leaf in the tree. The private key is a private parameter that will later let us prove we are the owners of that leaf. The nullifier is another parameter whose hash is passed to the solidity contract when redeeming the funds, to prevent a user from withdrawing funds twice (double spend). The remaining private parameters are the help the circuit needs to reconstruct the tree and check that we are part of it.
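As a minimal sketch of what this hashing looks like off-chain, here is how the commitment and the nullifier hash could be computed in Node.js with `circomlibjs` (the sample values below are placeholders for illustration; in the real app they must be random secrets):

```js
// Hedged sketch, assuming `npm install circomlibjs` and an ESM environment
import { buildPoseidon } from "circomlibjs";

const poseidon = await buildPoseidon();

const privateKey = 1234n; // private: later proves we own the leaf
const nullifier = 5678n;  // private: its hash prevents double spends

// The leaf stored in the tree: Poseidon(privateKey, nullifier)
const commitment = poseidon.F.toString(poseidon([privateKey, nullifier]));
// Revealed on withdrawal: Poseidon(nullifier)
const nullifierHash = poseidon.F.toString(poseidon([nullifier]));

console.log({ commitment, nullifierHash });
```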
We start by installing the `circomlib` library, which contains the poseidon circuits we will be using in this tutorial.
```bash
git clone https://github.com/iden3/circomlib.git
```
Now we create our `proveWithdrawal` circuit, which proves that we have deposited into the contract without revealing who we are.
`proveWithdrawal.circom`
```js
pragma circom 2.0.0;
include "circomlib/circuits/poseidon.circom";
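// Puts a pair of sibling hashes in the right order along the merkle path: the outputs are swapped when s == 1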
template switchPosition() {
    signal input in[2];
    signal input s;
    signal output out[2];
    s * (1 - s) === 0;
    out[0] <== (in[1] - in[0])*s + in[0];
    out[1] <== (in[0] - in[1])*s + in[1];
}
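// Hashes the deposit secrets: commitment = Poseidon(privateKey, nullifier); nullifierHash = Poseidon(nullifier)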
template commitmentHasher() {
    signal input privateKey;
    signal input nullifier;
    signal output commitment;
    signal output nullifierHash;

    component commitmentHashComponent;
    commitmentHashComponent = Poseidon(2);
    commitmentHashComponent.inputs[0] <== privateKey;
    commitmentHashComponent.inputs[1] <== nullifier;
    commitment <== commitmentHashComponent.out;

    component nullifierHashComponent;
    nullifierHashComponent = Poseidon(1);
    nullifierHashComponent.inputs[0] <== nullifier;
    nullifierHash <== nullifierHashComponent.out;
}
template proveWithdrawal(levels) {
signal input root;
signal input recipient;
signal input privateKey;
signal input nullifier;
signal input pathElements[levels];
signal input pathIndices[levels];
signal output nullifierHash;
signal leaf;
component commitmentHasherComponent;
commitmentHasherComponent = commitmentHasher();
commitmentHasherComponent.privateKey <== privateKey;
commitmentHasherComponent.nullifier <== nullifier;
leaf <== commitmentHasherComponent.commitment;
nullifierHash <== commitmentHasherComponent.nullifierHash;
component selectors[levels];
component hashers[levels];
signal computedPath[levels];
for (var i = 0; i < levels; i++) {
selectors[i] = switchPosition();
selectors[i].in[0] <== i == 0 ? leaf : computedPath[i - 1];
selectors[i].in[1] <== pathElements[i];
selectors[i].s <== pathIndices[i];
hashers[i] = Poseidon(2);
hashers[i].inputs[0] <== selectors[i].out[0];
hashers[i].inputs[1] <== selectors[i].out[1];
computedPath[i] <== hashers[i].out;
}
root === computedPath[levels - 1];
}
component main {public [root, recipient]} = proveWithdrawal(2);
```
To compile the circuit you need both circom and snarkjs installed. If you don't have them, follow the Circom installation guide below.
{% collapsible Circom installation guide %}
Run these commands to install circom and snarkjs.
```bash
curl --proto '=https' --tlsv1.2 https://sh.rustup.rs -sSf | sh
git clone https://github.com/iden3/circom.git
cd circom
cargo build --release
cargo install --path circom
npm install -g snarkjs
```
{% endcollapsible %}
Run the following commands to generate the trusted setup and produce the artifact files we'll use later in the frontend.
```bash
circom proveWithdrawal.circom --r1cs --wasm --sym
snarkjs powersoftau new bn128 12 pot12_0000.ptau -v
snarkjs powersoftau contribute pot12_0000.ptau pot12_0001.ptau --name="First contribution" -v
snarkjs powersoftau prepare phase2 pot12_0001.ptau pot12_final.ptau -v
snarkjs groth16 setup proveWithdrawal.r1cs pot12_final.ptau proveWithdrawal_0000.zkey
snarkjs zkey contribute proveWithdrawal_0000.zkey proveWithdrawal_0001.zkey --name="1st Contributor Name" -v
snarkjs zkey export verificationkey proveWithdrawal_0001.zkey verification_key.json
```
Now we can generate the verifier contract in `verifier.sol`.
```bash
snarkjs zkey export solidityverifier proveWithdrawal_0001.zkey verifier.sol
```
## 3. The contracts
_Supporting material: [Solidity en 15 minutos](https://dev.to/turupawn/como-lanzar-un-dex-paso-a-pasox-2cnm)_
The contracts are the transparent guarantee that everything works correctly. They let us keep track of how much has been deposited, and they verify that the proofs are valid before releasing the funds. Keep in mind that everything that happens in the smart contracts is public; this is the part of our system that is not anonymous.
We'll use three contracts. The first is the verifier contract we just generated in the `verifier.sol` file; deploy it now.
The second contract is the Poseidon one. If you're on Scroll Sepolia, simply use the one I already deployed at `0x52f28FEC91a076aCc395A8c730dCa6440B6D9519`. If you want to use another blockchain, expand the section and follow these steps:
{% collapsible Deploy the Poseidon contract %}
The Poseidon version used in our circuit and in our contract must be exactly compatible. Therefore we use the version in `circomlibjs` as shown below; just make sure to put your private key and RPC URL in `TULLAVEPRIVADA` and `TUURLRPC`.
```bash
git clone https://github.com/iden3/circomlibjs.git
node --input-type=module --eval "import { writeFileSync } from 'fs'; import('./circomlibjs/src/poseidon_gencontract.js').then(({ createCode }) => { const output = createCode(2); writeFileSync('poseidonBytecode', output); })"
cast send --rpc-url TUURLRPC --private-key TULLAVEPRIVADA --create $(cat poseidonBytecode)
```
{% endcollapsible %}
Now deploy the `Huracan` contract, passing the Poseidon and verifier contract addresses as constructor parameters (in that order).
```js
// SPDX-License-Identifier: MIT
pragma solidity >=0.7.0 <0.9.0;
interface IPoseidon {
function poseidon(uint[2] memory inputs) external returns(uint[1] memory output);
}
interface ICircomVerifier {
function verifyProof(uint[2] calldata _pA, uint[2][2] calldata _pB, uint[2] calldata _pC, uint[3] calldata _pubSignals) external view returns (bool);
}
contract Huracan {
ICircomVerifier circomVerifier;
uint nextIndex;
uint public constant LEVELS = 2;
uint public constant MAX_SIZE = 4;
uint public NOTE_VALUE = 0.001 ether;
uint[] public filledSubtrees = new uint[](LEVELS);
uint[] public emptySubtrees = new uint[](LEVELS);
address POSEIDON_ADDRESS;
uint public root;
mapping(uint => uint) public commitments;
mapping(uint => bool) public nullifiers;
event Deposit(uint index, uint commitment);
constructor(address poseidonAddress, address circomVeriferAddress) {
POSEIDON_ADDRESS = poseidonAddress;
circomVerifier = ICircomVerifier(circomVeriferAddress);
for (uint32 i = 1; i < LEVELS; i++) {
emptySubtrees[i] = IPoseidon(POSEIDON_ADDRESS).poseidon([
emptySubtrees[i-1],
0
])[0];
}
}
function deposit(uint commitment) public payable {
require(msg.value == NOTE_VALUE, "Invalid value sent");
require(nextIndex != MAX_SIZE, "Merkle tree is full. No more leaves can be added");
uint currentIndex = nextIndex;
uint currentLevelHash = commitment;
uint left;
uint right;
for (uint32 i = 0; i < LEVELS; i++) {
if (currentIndex % 2 == 0) {
left = currentLevelHash;
right = emptySubtrees[i];
filledSubtrees[i] = currentLevelHash;
} else {
left = filledSubtrees[i];
right = currentLevelHash;
}
currentLevelHash = IPoseidon(POSEIDON_ADDRESS).poseidon([left, right])[0];
currentIndex /= 2;
}
root = currentLevelHash;
emit Deposit(nextIndex, commitment);
commitments[nextIndex] = commitment;
nextIndex = nextIndex + 1;
}
function withdraw(uint[2] calldata _pA, uint[2][2] calldata _pB, uint[2] calldata _pC, uint[3] calldata _pubSignals) public {
require(circomVerifier.verifyProof(_pA, _pB, _pC, _pubSignals), "Invalid proof");
uint nullifierHash = _pubSignals[0];
uint rootPublicInput = _pubSignals[1];
address recipient = address(uint160(_pubSignals[2]));
require(root == rootPublicInput, "Invalid merkle root");
require(!nullifiers[nullifierHash], "Note already spent");
nullifiers[nullifierHash] = true;
(bool sent, bytes memory data) = recipient.call{value: NOTE_VALUE}("");
require(sent, "Failed to send Ether");
data;
}
}
```
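If you prefer deploying from a script instead of Remix, here is a minimal sketch using ethers v6 (the artifact path and the verifier address are assumptions; compile `Huracan` with your tool of choice first and run the script as an ES module, e.g. `deploy.mjs`):

```js
// sketch: deploying Huracan with ethers v6 (artifact produced by your compiler, e.g. Hardhat)
import fs from "fs";
import { ethers } from "ethers";

const artifact = JSON.parse(fs.readFileSync("./artifacts/Huracan.json", "utf8"));
const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
const signer = new ethers.Wallet(process.env.PRIVATE_KEY, provider);

const POSEIDON_ADDRESS = "0x52f28FEC91a076aCc395A8c730dCa6440B6D9519"; // Scroll Sepolia
const VERIFIER_ADDRESS = "0x..."; // your verifier.sol deployment

const factory = new ethers.ContractFactory(artifact.abi, artifact.bytecode, signer);
const huracan = await factory.deploy(POSEIDON_ADDRESS, VERIFIER_ADDRESS);
await huracan.waitForDeployment();
console.log("Huracan deployed at", await huracan.getAddress());
```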
## 4. The frontend
_Supporting material: [Interfaces con privacidad en Solidity y zk-WASM](https://dev.to/turupawn/interfaces-con-privacidad-en-solidity-y-zk-wasm-1amg)_
The frontend is the graphical interface we'll interact with. In this demo we use vanilla HTML and JS so that developers can adapt it to whatever frontend framework they use. A very important property of the frontend is that it must be able to produce the zk proofs without leaking private information onto the internet. This is why zk-WASM matters: it lets us build proofs efficiently right in the browser.
Now create the following file structure:
```
js/
blockchain_stuff.js
snarkjs.min.js
json_abi/
Huracan.json
Poseidon.json
zk_artifacts/
proveWithdrawal_final.zkey
proveWithdrawal.wasm
index.html
```
* `js/snarkjs.min.js`: download this file, which contains the snark.js library
* `json_abi/Huracan.json`: the ABI of the `Huracan` contract we just deployed. In Remix, for example, you can get it by clicking the "ABI" button on the compilation tab.
* `json_abi/Poseidon.json`: paste [this](https://gist.github.com/Turupawn/b89999eb8b00d7507908d6fbf6aa7f0b)
* `zk_artifacts`: place the artifacts generated earlier in this folder. Note: rename proveWithdrawal_0001.zkey to proveWithdrawal_final.zkey, and copy proveWithdrawal.wasm from the generated proveWithdrawal_js folder.
* `index.html`, `js/blockchain_stuff.js` and `js/zk_stuff.js` are detailed below
The HTML file contains the interface users need in order to interact with Huracán.
`index.html`
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
</head>
<body>
<input id="connect_button" type="button" value="Connect" onclick="connectWallet()" style="display: none"></input>
<p id="account_address" style="display: none"></p>
<p id="web3_message"></p>
<p id="contract_state"></p>
<input type="input" value="" id="depositPrivateKey" placeholder="private key"></input>
<input type="input" value="" id="depositNullifier" placeholder="nullifier"></input>
<input type="button" value="Deposit" onclick="_deposit()"></input>
<br>
<input type="input" value="" id="withdrawPrivateKey" placeholder="private key"></input>
<input type="input" value="" id="withdrawNullifier" placeholder="nullifier"></input>
<input type="input" value="" id="withdrawRecipient" placeholder="recipient"></input>
<input type="button" value="Withdraw" onclick="_withdraw()"></input>
<br>
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/web3/1.3.5/web3.min.js"></script>
<script type="text/javascript" src="js/zk_stuff.js"></script>
<script type="text/javascript" src="js/blockchain_stuff.js"></script>
<script type="text/javascript" src="js/snarkjs.min.js"></script>
</body>
</html>
<script>
function _deposit()
{
depositPrivateKey = document.getElementById("depositPrivateKey").value
depositNullifier = document.getElementById("depositNullifier").value
deposit(depositPrivateKey, depositNullifier)
}
function _withdraw()
{
withdrawPrivateKey = document.getElementById("withdrawPrivateKey").value
withdrawNullifier = document.getElementById("withdrawNullifier").value
withdrawRecipient = document.getElementById("withdrawRecipient").value
withdraw(withdrawPrivateKey, withdrawNullifier, withdrawRecipient)
}
</script>
```
Now we add all the web3-related logic: connecting to the wallet in the browser, reading contract state, and calling functions.
`js/blockchain_stuff.js`
```js
const NETWORK_ID = 534351
const HURACAN_ADDRESS = "0x8BD32BDC921f5239c0f5d9eaf093B49A67C3b9d0"
const HURACAN_ABI_PATH = "./json_abi/Huracan.json"
const POSEIDON_ADDRESS = "0x52f28FEC91a076aCc395A8c730dCa6440B6D9519"
const POSEIDON_ABI_PATH = "./json_abi/Poseidon.json"
const RELAYER_URL = "http://localhost:8080"
var huracanContract
var poseidonContract
var accounts
var web3
let leaves
function metamaskReloadCallback() {
window.ethereum.on('accountsChanged', (accounts) => {
document.getElementById("web3_message").textContent="Account changed, refreshing...";
window.location.reload()
})
window.ethereum.on('networkChanged', (accounts) => {
document.getElementById("web3_message").textContent="Network changed, refreshing...";
window.location.reload()
})
}
const getWeb3 = async () => {
return new Promise((resolve, reject) => {
if(document.readyState=="complete")
{
if (window.ethereum) {
const web3 = new Web3(window.ethereum)
window.location.reload()
resolve(web3)
} else {
reject("must install MetaMask")
document.getElementById("web3_message").textContent="Error: Please connect to Metamask";
}
}else
{
window.addEventListener("load", async () => {
if (window.ethereum) {
const web3 = new Web3(window.ethereum)
resolve(web3)
} else {
reject("must install MetaMask")
document.getElementById("web3_message").textContent="Error: Please install Metamask";
}
});
}
});
};
const getContract = async (web3, address, abi_path) => {
const response = await fetch(abi_path);
const data = await response.json();
const netId = await web3.eth.net.getId();
contract = new web3.eth.Contract(
data,
address
);
return contract
}
async function loadDapp() {
metamaskReloadCallback()
document.getElementById("web3_message").textContent="Please connect to Metamask"
var awaitWeb3 = async function () {
web3 = await getWeb3()
web3.eth.net.getId((err, netId) => {
if (netId == NETWORK_ID) {
var awaitContract = async function () {
huracanContract = await getContract(web3, HURACAN_ADDRESS, HURACAN_ABI_PATH)
poseidonContract = await getContract(web3, POSEIDON_ADDRESS, POSEIDON_ABI_PATH)
document.getElementById("web3_message").textContent="You are connected to Metamask"
onContractInitCallback()
web3.eth.getAccounts(function(err, _accounts){
accounts = _accounts
if (err != null)
{
console.error("An error occurred: "+err)
} else if (accounts.length > 0)
{
onWalletConnectedCallback()
document.getElementById("account_address").style.display = "block"
} else
{
document.getElementById("connect_button").style.display = "block"
}
});
};
awaitContract();
} else {
document.getElementById("web3_message").textContent="Please connect to Scroll Sepolia";
}
});
};
awaitWeb3();
}
async function connectWallet() {
await window.ethereum.request({ method: "eth_requestAccounts" })
accounts = await web3.eth.getAccounts()
onWalletConnectedCallback()
}
loadDapp()
const onContractInitCallback = async () => {
document.getElementById("web3_message").textContent="Reading merkle tree data...";
leaves = []
let i =0
let maxSize = await huracanContract.methods.MAX_SIZE().call()
for(let i=0; i<maxSize; i++)
{
leaves.push(await huracanContract.methods.commitments(i).call())
}
document.getElementById("web3_message").textContent="All ready!";
}
const onWalletConnectedCallback = async () => {
}
//// Functions ////
const deposit = async (depositPrivateKey, depositNullifier) => {
let commitment = await poseidonContract.methods.poseidon([depositPrivateKey,depositNullifier]).call()
let value = await huracanContract.methods.NOTE_VALUE().call()
document.getElementById("web3_message").textContent="Please confirm transaction.";
const result = await huracanContract.methods.deposit(commitment)
.send({ from: accounts[0], gas: 0, value: value })
.on('transactionHash', function(hash){
document.getElementById("web3_message").textContent="Executing...";
})
.on('receipt', function(receipt){
document.getElementById("web3_message").textContent="Success."; })
.catch((revertReason) => {
console.log("ERROR! Transaction reverted: " + revertReason.receipt.transactionHash)
});
}
const withdraw = async (privateKey, nullifier, recipient) => {
document.getElementById("web3_message").textContent="Generating proof...";
let commitment = await poseidonContract.methods.poseidon([privateKey,nullifier]).call()
let index = null
for(let i=0; i<leaves.length;i++)
{
if(commitment == leaves[i])
{
index = i
}
}
if(index == null)
{
console.log("Commitment not found in merkle tree")
return
}
let root = await huracanContract.methods.root().call()
let proof = await getWithdrawalProof(index, privateKey, nullifier, recipient, root)
await sendProofToRelayer(proof.pA, proof.pB, proof.pC, proof.publicSignals)
}
const sendProofToRelayer = async (pA, pB, pC, publicSignals) => {
fetch(RELAYER_URL + "/relay?pA=" + pA + "&pB=" + pB + "&pC=" + pC + "&publicSignals=" + publicSignals)
.then(res => res.json())
.then(out =>
console.log(out))
.catch();
}
```
Finally, the file containing the ZK-related logic. This file takes care of generating the ZK proofs.
`js/zk_stuff.js`
```js
async function getMerklePath(leaves) {
if (leaves.length === 0) {
throw new Error('Leaves array is empty');
}
let layers = [leaves];
// Build the Merkle tree
while (layers[layers.length - 1].length > 1) {
const currentLayer = layers[layers.length - 1];
const nextLayer = [];
for (let i = 0; i < currentLayer.length; i += 2) {
const left = currentLayer[i];
const right = currentLayer[i + 1] ? currentLayer[i + 1] : left; // Handle odd number of nodes
nextLayer.push(await poseidonContract.methods.poseidon([left,right]).call())
}
layers.push(nextLayer);
}
const root = layers[layers.length - 1][0];
function getPath(leafIndex) {
let pathElements = [];
let pathIndices = [];
let currentIndex = leafIndex;
for (let i = 0; i < layers.length - 1; i++) {
const currentLayer = layers[i];
const isLeftNode = currentIndex % 2 === 0;
const siblingIndex = isLeftNode ? currentIndex + 1 : currentIndex - 1;
pathIndices.push(isLeftNode ? 0 : 1);
pathElements.push(siblingIndex < currentLayer.length ? currentLayer[siblingIndex] : currentLayer[currentIndex]);
currentIndex = Math.floor(currentIndex / 2);
}
return {
PathElements: pathElements,
PathIndices: pathIndices
};
}
// You can get the path for any leaf index by calling getPath(leafIndex)
return {
getMerklePathForLeaf: getPath,
root: root
};
}
function addressToUint(address) {
const hexString = address.replace(/^0x/, '');
const uint = BigInt('0x' + hexString);
return uint;
}
async function getWithdrawalProof(index, privateKey, nullifier, recipient, root) {
let merklePath = await getMerklePath(leaves)
let pathElements = merklePath.getMerklePathForLeaf(index).PathElements;
let pathIndices = merklePath.getMerklePathForLeaf(index).PathIndices;
let proverParams = {
"privateKey": privateKey,
"nullifier": nullifier,
"recipient": addressToUint(recipient),
"root": root,
"pathElements": pathElements,
"pathIndices": pathIndices
}
const { proof, publicSignals } = await snarkjs.groth16.fullProve(
proverParams,
"../zk_artifacts/proveWithdrawal.wasm", "../zk_artifacts/proveWithdrawal_final.zkey"
);
let pA = proof.pi_a
pA.pop()
let pB = proof.pi_b
pB.pop()
let pC = proof.pi_c
pC.pop()
document.getElementById("web3_message").textContent="Proof generated, please confirm the transaction.";
return {
pA: pA,
pB: pB,
pC: pC,
publicSignals: publicSignals
}
}
```
## 5. The relayer
What good are zk anonymity proofs if in the end we execute the transaction ourselves? Doing so would destroy our privacy, because on Ethereum everything is public. That's why we need a relayer: an intermediary that executes the on-chain transaction on behalf of the anonymous user.
We start by creating the backend file.
`relayer.mjs`
```js
import fs from "fs"
import cors from "cors"
import express from "express"
import { ethers } from 'ethers';
const app = express()
app.use(cors())
const JSON_CONTRACT_PATH = "./json_abi/Huracan.json"
const CHAIN_ID = "534351"
const PORT = 8080
var contract
var provider
var signer
const { RPC_URL, HURACAN_ADDRESS, RELAYER_PRIVATE_KEY, RELAYER_ADDRESS } = process.env;
const loadContract = async (data) => {
data = JSON.parse(data);
contract = new ethers.Contract(HURACAN_ADDRESS, data, signer);
}
async function initAPI() {
provider = new ethers.JsonRpcProvider(RPC_URL);
signer = new ethers.Wallet(RELAYER_PRIVATE_KEY, provider);
fs.readFile(JSON_CONTRACT_PATH, 'utf8', function (err,data) {
if (err) {
return console.log(err);
}
loadContract(data)
});
app.listen(PORT, () => {
console.log(`Listening to port ${PORT}`)
})
}
async function relayMessage(pA, pB, pC, publicSignals)
{
console.log(pA)
console.log(pB)
console.log(pC)
console.log(publicSignals)
const transaction = {
from: RELAYER_ADDRESS,
to: HURACAN_ADDRESS,
value: '0',
gasPrice: "700000000", // 0.7 gwei
nonce: await provider.getTransactionCount(RELAYER_ADDRESS),
chainId: CHAIN_ID,
data: contract.interface.encodeFunctionData(
"withdraw",[pA, pB, pC, publicSignals]
)
};
const signedTransaction = await signer.populateTransaction(transaction);
const transactionResponse = await signer.sendTransaction(signedTransaction);
console.log('🎉 The hash of your transaction is:', transactionResponse.hash);
}
app.get('/relay', (req, res) => {
console.log(req)
var pA = req.query["pA"].split(',')
var pBTemp = req.query["pB"].split(',')
const pB = [
[pBTemp[0], pBTemp[1]],
[pBTemp[2], pBTemp[3]]
];
var pC = req.query["pC"].split(',')
var publicSignals = req.query["publicSignals"].split(',')
relayMessage(pA, pB, pC, publicSignals)
res.setHeader('Content-Type', 'application/json');
res.send({
"message": "the proof was relayed"
})
})
initAPI()
```
Install the dependencies so you can run the relayer locally. Besides `cors`, the script also imports `express` and `ethers`, so install them if you haven't already.
```
npm install cors express ethers
```
Now launch the server, replacing `TUURLRPC`, `TUHURACANADDRESS`, `TULLAVEPRIVADA`, and `TUADDRESS` in the command below.
```bash
RPC_URL=TUURLRPC HURACAN_ADDRESS=TUHURACANADDRESS RELAYER_PRIVATE_KEY=TULLAVEPRIVADA RELAYER_ADDRESS=TUADDRESS node relayer.mjs
```
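Once the relayer is up, you can smoke-test the endpoint from Node 18+ (the proof values below are placeholders, so the on-chain call itself will fail, but the relayer should log the request and respond):

```js
// sketch: smoke-testing the /relay endpoint locally with placeholder proof values
const qs = new URLSearchParams({
  pA: "1,2",
  pB: "1,2,3,4",
  pC: "1,2",
  publicSignals: "1,2,3",
});

const res = await fetch(`http://localhost:8080/relay?${qs}`);
console.log(await res.json()); // { message: "the proof was relayed" }
```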
You're now ready to deposit and withdraw funds in Huracán from the web interface.
![Huracán web](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m0gfsc81pgjfcyt32gyu.png)
## 6. How to take Huracán to production?
**a. Store the root history on-chain**
Since we only store the most recent root, any generated proof must use it. This means that if someone makes a deposit right after you generate a withdrawal proof, and therefore changes the root, the proof becomes invalid and a new one must be generated.
_Required changes:_ Store the full history of roots on-chain, for example in a mapping `mapping(uint id => uint root) public roots;`, and use the most recent one when generating a proof. If someone deposits and changes the root there's no problem, because verification is done against any root that was ever stored, using a function such as `isKnownRoot(uint root)`.
**b. Index the merkle tree somewhere accessible**
To generate an inclusion proof, we need to read the current state of the tree. We currently read it from the `commitments` variable, but this process is slow and requires many RPC calls if the tree is large.
_Required changes:_ Store and index the entire tree somewhere accessible. I think the ideal place for this is a subgraph. As a stopgap, you can at least batch the reads, as sketched below.
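A hedged sketch using the web3.js contract instance from the frontend:

```js
// sketch: rebuilding the leaves list from Deposit events instead of N commitments(i) calls
const depositEvents = await huracanContract.getPastEvents("Deposit", {
  fromBlock: 0,
  toBlock: "latest",
});
leaves = depositEvents.map((e) => e.returnValues.commitment);
```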
**c. Incentivize the relayer**
The relayer needs a reward, since it's the one paying the on-chain transaction fees.
_Required changes:_ When generating the proof, grant a percentage of the note to the relayer. You can do this by adding an extra parameter to the circuits, for example `signal input fee;`, and in Solidity sending that amount to `msg.sender` or to whomever the relayer designates.
**d. Use appropriate libraries**
In the webapp, instead of vanilla HTML and JS you should use a frontend framework like React, Angular, or Vue to offer a better experience to users and developers.
For the relayer, instead of Express you should use a more robust backend and host it on a machine equipped to handle a high volume of transactions, with anti-DoS mechanisms and a proper firewall, since the relayer's gas funds make the server an attractive target for attackers.
**e. Choose the merkle tree size**
This example works for 4 depositors; you'll have to propagate changes to the circuit and contract to make it work for more.
_Required changes:_ Start by changing [the number of levels in the circuit](https://github.com/Turupawn/Huracan/blob/master/circuits/proveWithdrawal.circom#L67); it's currently set to 2, which is what a tree with 4 leaves needs. Also update the `LEVELS` and `MAX_SIZE` constants [in the contract](https://github.com/Turupawn/Huracan/blob/master/contracts/Huracan.sol#L16). If your tree is very large, you can save gas at deployment by hardcoding the default values of an empty tree [instead of using a loop](https://github.com/Turupawn/Huracan/blob/master/contracts/Huracan.sol#L33) as I've shown.
**f. Remember, very importantly, that everything we've used is still experimental**
The circuits and contracts in this guide have not been properly audited, and neither have the libraries they use. Take Poseidon, for example: a very promising new hash function that we use instead of the traditional Pedersen.
Also remember that in this tutorial we did not perform a secure trusted setup. Doing so requires an open ceremony, with enough time for broad participation.
## 7. Ideas to explore further
**a. Exclusion proofs**
Just as we handled inclusion proofs in this example, we can build exclusion proofs that _demonstrate we are not part of a blacklisted group_. This can help with compatibility with future regulations that states may establish.
**b. Use ERC20s instead of Ether**
Instead of using ether as the native currency in Solidity, we can use a specific ERC20. The changes are limited to the Solidity contracts and the webapp; the circuits can stay exactly the same.
**c. Experiment with re-staking**
Once you integrate ERC20s, I think a good next step is to experiment with generating passive yield using LSTs.
**d. Think of other use cases!**
Anonymous inclusion proofs have many use cases, even outside DeFi. Think about how you can apply what you've learned to voting and governance systems, social networks, video games, etc...
**Thanks for reading this guide!**
Follow me on dev.to and on [Youtube](https://www.youtube.com/channel/UCNRB4tgwp09z4391JRjEsRA) for everything related to blockchain development in Spanish. | turupawn |
1,913,035 | Stay Cool, Swing Better: Ultimate Guide to Golf Course Comfort | Introduction: Picture this: you're on the green, lining up the perfect putt, feeling the sun's warmth... | 0 | 2024-07-05T17:06:07 | https://dev.to/earl_wood_475ab0c51fe29a5/stay-cool-swing-better-ultimate-guide-to-golf-course-comfort-2477 | hat, sports, golf, cycling |
Introduction: Picture this: you're on the green, lining up the perfect putt, feeling the sun's warmth on your back. But as the round progresses, so does the heat. Sweat drips down your brow, and suddenly, your focus wavers. Staying cool and comfortable on the golf course isn't just about enjoyment—it can directly impact your performance. Today, we're diving into the ultimate guide to achieving golf course comfort, so you can swing better, stay focused, and enjoy every moment on the course.
Hook: Ever wondered how pro golfers manage to stay cool and composed, even in scorching conditions? Let's uncover the secrets to maintaining peak performance through comfort.
The Problem: Golf is a game of precision and concentration, and discomfort from heat or improper attire can easily derail your game. Many golfers struggle with staying cool and protected from the sun while maintaining their focus on the course.
Objection Handling: You might be thinking, "Isn't golf just about skill and technique?" Absolutely, but comfort plays a crucial role in optimizing your performance. When you're distracted by heat or discomfort, it's harder to concentrate on your swing and strategy. Plus, staying cool can prevent fatigue and keep you sharp throughout all 18 holes.
Open Loops: So, how do you achieve golf course comfort that enhances your game? What are the best strategies and gear to keep you cool and focused? Stick with me as we explore practical tips and tools designed to elevate your comfort and performance on the fairway.
Let's break it down. First, dress smart: choose lightweight, moisture-wicking fabrics that allow your skin to breathe and evaporate sweat quickly. Invest in a good quality golf hat with UV protection to shield your face and neck from the sun's harmful rays. Hydration is key—keep a water bottle handy and sip regularly to maintain energy levels and focus.
Next, consider accessories like cooling towels or neck gaiters that can provide instant relief from heat. Plan your tee times strategically to avoid playing during the hottest parts of the day. Lastly, practice mindfulness techniques to stay mentally cool under pressure, ensuring your swing remains smooth and consistent.
Conclusion: achieving golf course comfort isn't just about comfort—it's about optimizing your performance and enjoying your time on the course to the fullest. By implementing these practical tips and strategies, you'll be equipped to stay cool, swing better, and ultimately, elevate your golf game. So, gear up, stay hydrated, and embrace the comfort that leads to your best rounds yet!
Ready to enhance your golfing experience? Explore our recommended gear and strategies for staying cool and focused on the course. Your next round could be your best yet—start improving your comfort and performance today. Go get em!
| earl_wood_475ab0c51fe29a5 |
1,913,029 | Building a Personalized Nutrition Planning App with Strapi and Next.js | It is true that maintaining a healthy diet can be a challenge especially because we often find... | 0 | 2024-07-05T17:04:50 | https://dev.to/joanayebola/building-a-personalized-nutrition-planning-app-with-strapi-and-nextjs-fn1 | strapi, nextjs, webdev, beginners | It is true that maintaining a healthy diet can be a challenge especially because we often find ourselves short on time, unsure about what to eat, or struggling to track our nutritional intake. A nutrition planning app can be a valuable tool in overcoming these issues.
This guide will walk you through the process of building a nutrition planning app using Strapi, a user-friendly headless content management system (CMS), and Next.js, a powerful framework for building modern web applications. With these technologies, you can create a personalized app that helps you achieve your dietary goals.
This guide is designed for individuals with a basic understanding of web development concepts. We will provide clear explanations and step-by-step instructions, making the process accessible even for those new to Strapi and Next.js.
**Prerequisites**
To begin with, there are a few essential tools you'll need to have installed on your system.
1. Node.js and npm (or yarn)
Installing Node.js and npm (or yarn)
Both Node.js and npm (or yarn) are typically installed together as part of the Node.js download. Here's how to get them set up on your system:
* Download Node.js:
Head over to the official Node.js website: https://nodejs.org/en/download
Choose the appropriate installer for your operating system (Windows, macOS, or Linux).
![Screenshot (49)](https://hackmd.io/_uploads/SJC6OOUQR.png)
* Install Node.js:
Follow the on-screen instructions during the installation process.
In most cases, the default settings will be sufficient.
* Verify Installation:
Once the installation is complete, open your terminal or command prompt.
Type the following commands and press Enter after each:
``` bash
node -v
npm -v
```
These commands should display the installed versions of Node.js and npm, confirming successful installation.
**Alternatively, using yarn:**
If you prefer using yarn as your package manager, you can install it globally after installing Node.js:
``` bash
npm install -g yarn
```
Then, verify the installation by running:
``` bash
yarn --version
```
By following these steps, you'll have Node.js, npm, and optionally yarn ready to use for building your nutrition planning app and other JavaScript projects.
2. Creating a Next.js project using create-next-app command
Now that you have Node.js and npm (or yarn) set up, let's create the foundation for your nutrition planning app using Next.js. Here's how to do it with the `create-next-app` command:
* Open your terminal or command prompt.
* Navigate to the directory where you want to create your project. You can use the `cd` command to change directories. For example:
```bash
cd Documents/MyProjects
```
* Run the following command to create a new Next.js project:
```bash
npx create-next-app@latest my-nutrition-app
```
* `npx`: This is a tool included with npm that allows you to execute packages without installing them globally.
* `create-next-app@latest`: This is the command that initiates the project creation process. You can also specify a specific version of `create-next-app` if needed, but `@latest` ensures you're using the most recent stable version.
* `my-nutrition-app`: Replace this with your desired project name.
* Press Enter.
![Screenshot (52)](https://hackmd.io/_uploads/r1KGTY8QC.png)
The `create-next-app` command will download the necessary dependencies and set up the basic project structure for your Next.js application. This process might take a few moments.
Once finished, you'll see a success message in your terminal indicating that your project has been created.
![success](https://hackmd.io/_uploads/ryjsD6d7R.png)
## Initializing the Strapi project:
Navigate to the root directory of your Next.js project (cd my-nutrition-app).
Use `npx create-strapi-app@latest strapi-api`.
This creates a Strapi API project named **strapi-api** within your Next.js project.
![Strapi Api](https://hackmd.io/_uploads/HJ5Bcpd7C.png)
In the case of Strapi, when you initialize a new project using `npx create-strapi-app@latest strapi-api`, the Strapi CLI takes care of installing the necessary dependencies for you. This includes the core Strapi framework, database drivers (if applicable), and other packages required for Strapi to function.
So, you typically **don't need to manually install dependencies** after initialization. The `npm install` command is usually run during the initialization process itself.
Here's what happens during initialization:
1. **Downloads Packages:** The Strapi CLI downloads the required packages from the npm registry and installs them in your Strapi project's `node_modules` directory.
2. **Configures Project:** The CLI configures your project based on your choices during initialization (e.g., database type). This involves setting up configuration files.
The address displayed after initialization (`http://localhost:1337/admin` by default) indicates that Strapi has started successfully and the admin panel is accessible. This confirms that the dependencies were installed and configured correctly.
**Exceptions:**
- **Custom Dependencies:** If you plan to use additional functionalities beyond the core Strapi features, you might need to install specific npm packages for those features within your Strapi project.
- **Manual Installation:** If you encounter issues during initialization or prefer a more manual approach, you can install the CLI globally with `npm install -g create-strapi-app` and then create your project using `create-strapi-app strapi-api`. However, this global installation is generally not recommended for managing individual project dependencies.
You can access the Strapi admin panel at http://localhost:1337/admin.
## Getting Started with Strapi
When you start a Strapi server, you'll see a login page because Strapi provides an admin panel for managing your content and configurations. This is a secure way to access and modify your Strapi data.
![Strapi Login Page](https://hackmd.io/_uploads/BkTQoaOmA.png)
**Login with your credentials:** On the very first run, Strapi prompts you to create an administrator account; use those credentials to log in here.
If you inherited an existing project, ask whoever set it up for access, and change any shared credentials for security reasons.
Quick Start Guide: https://docs.strapi.io/dev-docs/quick-start
Installation: https://docs.strapi.io/dev-docs/installation
Setup and Deployment: https://docs.strapi.io/dev-docs/quick-start
You can refer to this documentation to guide you through setting up Strapi.
## Defining Strapi Content Types for Your Nutrition App
Strapi uses content types, also known as models, to define the structure of your data. These models will represent the different entities in your nutrition planning app. Here's how we'll set them up for our project:
1. **Navigate to your Strapi project directory:** Use the `cd` command in your terminal to change directories, for example:
```bash
cd strapi-api
```
2. **Start the Strapi development server:**
```bash
npm run develop
```
This command will launch the Strapi admin panel in your web browser, typically accessible at http://localhost:1337/.
3. **Access the Content-Type Builder:**
In the Strapi admin panel, navigate to the **Content-Type Builder** section (usually found in the left sidebar).
![Strapi Homepage](https://hackmd.io/_uploads/ryZs6TOQC.png)
4. **Create each content type:**
* Click on **"Create a new collection/single type"**.
![Creating Collection Type](https://hackmd.io/_uploads/SybhCpd7R.png)
* Choose whether it's a **Collection Type** or a **Single Type**. In this app, Users, Foods, Meals, and Plans are all collection types, as described in the model overview below.
* Define the name and attributes for each content type, matching the ones listed previously.
* Save the content type after adding attributes.
## Creating Content Types in Strapi for your Nutrition App
Here's a guide on creating the content types you mentioned for your Personalized Nutrition Planning App in Strapi:
We'll create each content type (Food, Meal, Plan) one by one:
**a. User:**
- Attributes:
- Username (Text, unique)
- Email (Email, unique)
- Password (Password)
- **(Optional for personalization):**
- Age (Number)
- Weight (Number)
- Height (Number)
- Activity Level (Text options: Sedentary, Lightly Active, Moderately Active, Very Active)
- Dietary Restrictions (Text or JSON format for multiple restrictions)
- Goals (Text or JSON format for multiple goals: Weight Loss, Muscle Gain, Maintain Weight)
Strapi comes with a pre-built "User" content type by default. This is a common approach in content management systems (CMS) like Strapi.
Since you already see the User content type in your Strapi admin panel, you don't need to create one from scratch; we will be utilizing the default User content type (you can add the optional personalization fields above to it from the Content-Type Builder).
**b. Food:**
1. Go to the **Collection types** sub-navigation.
2. Click on **Create a new collection type**.
3. In the **Display name** field, enter "Food".
4. Leave the **API ID (singular)** and **API ID (plural)** pre-filled values (usually "food" and "foods").
5. Now, define the attributes for Food:
* Click **Add another field**.
* In the **Name** field, enter "Name".
* Select **Short Text** as the data type.
* Repeat for the following attributes with their corresponding data types:
* Description (Text)
* Calories (Number)
* Protein (Number)
* Carbs (Number)
* Fat (Number)
* For **Micronutrients**, create a new field with:
* Name: "Micronutrients"
* Data type: **JSON**. (This allows storing various micronutrient values as a key-value pair)
* For **Image**, create a new field with:
* Name: "Image"
* Data type: **Media**. (This allows uploading an image for the food item)
![Strapi Food](https://hackmd.io/_uploads/Hy6WQ-YQC.png)
**c. Meal:**
1. Go to the **Collection types** sub-navigation.
2. Click on **Create a new collection type**.
3. In the **Display name** field, enter "Meal".
4. Leave the **API ID (singular)** and **API ID (plural)** pre-filled values (usually "meal" and "meals").
5. Define the attributes for Meal:
* Name (Text)
* Description (Text) (optional)
* Foods (Relation):
* Click **Add another field**.
* Name: "Foods"
* Data type: **Relation**.
* Select "foods" (the Food collection type) in the **Target collection** dropdown. (This allows linking multiple food items to a meal)
* Total Calories (Number) (This will be a calculated field based on associated foods, we'll configure this later)
* Total Macronutrients (Number) (This will also be calculated based on associated foods)
**d. Plan:**
1. Go to the **Collection types** sub-navigation.
2. Click on **Create a new collection type**.
3. In the **Display name** field, enter "Plan".
4. Leave the **API ID (singular)** and **API ID (plural)** pre-filled values (usually "plan" and "plans").
5. Define the attributes for Plan:
* User (Relation):
* Click **Add another field**.
* Name: "User"
* Data type: **Relation**.
* Select "users" (assuming you have a User collection type) in the **Target collection** dropdown. (This links a plan to a specific user)
* Name (Text)
* Description (Text) (optional)
* Start Date (Date)
* End Date (Date) (optional)
* Meals (Relation):
* Click **Add another field**.
* Name: "Meals"
* Data type: **Relation**.
* Select "meals" (the Meal collection type) in the **Target collection** dropdown. (This allows linking multiple meals to a plan)
* Total Daily Calories (Number) (This will be a calculated field based on associated meals, we'll configure this later)
* Total Daily Macronutrients (Number) (This will also be calculated based on associated meals)
* **Optional for personalization:**
* Target Daily Calories (Number)
* Target Daily Macronutrient Ratios (JSON): This can be another field with JSON data type to define percentages for protein, carbs, and fat.
**e. Saving and Defining Relations:**
* Once you've defined all the attributes for each content type, click **Save**.
* Strapi will create the content types with the specified attributes.
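Note that Strapi won't compute the "Total" fields above on its own. One simple approach is to derive them wherever you display a meal or plan; here's a minimal sketch, assuming the Food attribute names defined earlier:

```javascript
// sketch: deriving a meal's totals from its foods on the client
function mealTotals(foods) {
  return foods.reduce(
    (totals, food) => ({
      calories: totals.calories + (food.calories || 0),
      protein: totals.protein + (food.protein || 0),
      carbs: totals.carbs + (food.carbs || 0),
      fat: totals.fat + (food.fat || 0),
    }),
    { calories: 0, protein: 0, carbs: 0, fat: 0 }
  );
}
```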
**Personalization Considerations:**
- The "User" content type captures information that allows for personalized recommendations. Utilize the optional fields to gather details about a user's activity level, dietary restrictions, and goals.
- When creating "Meals" and "Plans," consider allowing users to create custom meals and plans. You can also pre-populate some plans with sample meals based on different dietary needs or goals.
- The "Plan" content type's optional fields (Target Daily Calories and Macronutrient Ratios) enable you to calculate these values based on the user's profile and goals, creating a personalized meal plan.
**Note:**
- You can further customize these content types based on your specific app features.
- Explore Strapi's documentation ([https://docs.strapi.io/](https://docs.strapi.io/)) for detailed explanations of attribute types and functionalities.
## Strapi API Development: Connecting Next.js to Your Strapi Backend
Now that you have Strapi set up with the necessary content types, let's connect your Next.js frontend to this powerful API. Here's what we need to do:
**1. Setting Up Environment Variables for Strapi URL:**
To access your Strapi API from Next.js, you'll need to store the Strapi URL as an environment variable. This keeps your API endpoint configuration separate from your code and allows for easy management.
There are two main approaches to setting environment variables:
* **.env File:** This is a common approach for development environments. Create a file named `.env.local` in the root directory of your Next.js project (**my-nutrition-app**). Inside this file, define a variable named `NEXT_PUBLIC_STRAPI_URL` with the value of your Strapi API endpoint URL (the `NEXT_PUBLIC_` prefix makes the variable available to browser code as well as the server). For example:
```bash
NEXT_PUBLIC_STRAPI_URL=http://localhost:1337 # Assuming Strapi runs on localhost
```
**Important:** Remember to **exclude the .env file from version control** (e.g., Git) to avoid storing sensitive API URLs publicly.
* **System Environment Variables:** This approach is more suitable for production environments. You can set environment variables directly on your server or hosting platform. Consult the documentation for your specific hosting provider for instructions on setting environment variables.
## Connecting Next.js to Strapi API
Once you have the Strapi URL stored as an environment variable, you can use it within your Next.js components to fetch data from your Strapi API. Here are two common methods:
* **`getStaticProps`:** This function is used for pre-rendering data at build time. It's ideal for static pages that don't require frequent updates.
```javascript
// Example: Fetching all foods in a component
export async function getStaticProps() {
// Strapi v4 prefixes API routes with /api and wraps results in a `data` array
const response = await fetch(`${process.env.NEXT_PUBLIC_STRAPI_URL}/api/foods`);
const json = await response.json();
const foods = json.data.map((item) => ({ id: item.id, ...item.attributes })); // flatten id/attributes
return {
props: { foods },
};
}
function MyComponent({ foods }) {
// Use the fetched foods data in your component
}
```
* **`getServerSideProps`:** This function fetches data on every request to the server. It's useful for dynamic pages that require up-to-date information.
```javascript
// Example: Fetching daily plans for a specific user
export async function getServerSideProps(context) {
const userId = context.params.userId;
// Strapi v4 route for the Plan collection; filter by user and populate the related meals
const response = await fetch(`${process.env.NEXT_PUBLIC_STRAPI_URL}/api/plans?filters[user][id][$eq]=${userId}&populate=meals`);
const json = await response.json();
const dailyPlans = json.data.map((item) => ({ id: item.id, ...item.attributes }));
return {
props: { dailyPlans },
};
}
function MyComponent({ dailyPlans }) {
// Use the fetched daily plans data in your component
}
```
The choice between `getStaticProps` and `getServerSideProps` depends on your specific needs. Use `getStaticProps` for static content that doesn't change frequently. If you require dynamic updates based on user interaction or other factors, `getServerSideProps` is a better option.
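A useful middle ground is incremental static regeneration: `getStaticProps` plus a `revalidate` interval re-fetches data in the background without a full rebuild. A sketch reusing the fetch pattern above:

```javascript
// Example: re-fetch foods at most once every 60 seconds
export async function getStaticProps() {
  const response = await fetch(`${process.env.NEXT_PUBLIC_STRAPI_URL}/api/foods`);
  const json = await response.json();
  const foods = json.data.map((item) => ({ id: item.id, ...item.attributes }));
  return {
    props: { foods },
    revalidate: 60, // seconds
  };
}
```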
For the sake of this tutorial, let's use `getStaticProps` to keep things simpler and focus on the core concepts.
## User Authentication (Optional): Integrating Strapi and Next.js
While user accounts aren't essential for a basic nutrition planning app, they can unlock features like personalized meal plans and progress tracking. This section explores implementing user authentication with Strapi and Next.js (optional).
**1. Setting Up Strapi User Accounts:**
If you choose to include user accounts, you'll use Strapi's Users & Permissions plugin, which ships with Strapi by default:
* In the Strapi admin panel, navigate to **Settings** > **Plugins**.
* Confirm that the **Users & Permissions** plugin is listed and enabled.
This plugin provides user registration, login functionalities, and JWT (JSON Web Token) based authentication. You'll need to define the User model within Strapi, including attributes like username, email, and password.
**2. Implementing User Registration and Login in Next.js:**
Strapi's user functionalities are exposed through API endpoints. You'll use Next.js components and libraries like Axios to interact with these endpoints:
* **Registration:** Create a form component that captures user information (username, email, password) and sends a POST request to the Strapi `/api/auth/local/register` endpoint.
* **Login:** Implement a login form that sends user credentials (email, password) to the Strapi `/api/auth/local` endpoint for authentication. Upon successful login, Strapi will return a JWT token (see the sketch below).
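As a minimal sketch (Strapi v4 prefixes these routes with `/api`; the `lib/auth.js` filename is just a suggestion):

```javascript
// lib/auth.js: registration/login helpers (a sketch, not a hardened implementation)
import axios from 'axios';

const API = process.env.NEXT_PUBLIC_STRAPI_URL;

export async function register(username, email, password) {
  const res = await axios.post(`${API}/api/auth/local/register`, { username, email, password });
  return res.data; // { jwt, user }
}

export async function login(identifier, password) {
  // `identifier` accepts the username or email
  const res = await axios.post(`${API}/api/auth/local`, { identifier, password });
  return res.data; // { jwt, user }
}
```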
**3. Storing JWT Tokens in Next.js:**
Once you receive the JWT token from Strapi upon login, you'll need to store it securely in Next.js. Here are two common approaches:
* **Local Storage:** You can store the JWT token in the user's browser local storage. This is a convenient option for simple scenarios but has security limitations (tokens are accessible to JavaScript code).
* **Cookies (HttpOnly Flag):** Setting the HttpOnly flag on a cookie prevents JavaScript from accessing it directly, offering better security for storing JWT tokens. However, this approach requires additional configuration in your Next.js app.
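For the simpler localStorage approach, a usage sketch building on the helpers above (keep the security caveat in mind):

```javascript
// sketch: persisting the JWT after login and attaching it to later requests
import axios from 'axios';
import { login } from '../lib/auth'; // the helper sketched earlier

const { jwt } = await login('user@example.com', 'password123');
localStorage.setItem('token', jwt);

// authenticated request
await axios.get(`${process.env.NEXT_PUBLIC_STRAPI_URL}/api/plans`, {
  headers: { Authorization: `Bearer ${jwt}` },
});
```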
**4. Protecting Routes with Authorization:**
To protect specific routes or functionalities in your app that require user authentication, you can implement authorization checks using the stored JWT token. This involves checking if a valid token exists before rendering protected components or fetching sensitive data. Libraries like `next-auth/jwt` can simplify JWT authentication management in Next.js.
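A minimal client-side guard might look like this (a sketch: `/login` is a hypothetical page, and a real app should also validate the token server-side):

```javascript
// hooks/useRequireAuth.js: redirect to /login when no token is stored
import { useEffect } from 'react';
import { useRouter } from 'next/router';

export function useRequireAuth() {
  const router = useRouter();
  useEffect(() => {
    if (!localStorage.getItem('token')) {
      router.push('/login'); // hypothetical login route
    }
  }, [router]);
}
```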
**Important Note:**
This section provides a high-level overview of user authentication. Implementing secure and robust authentication is a complex topic. Refer to the Strapi documentation and Next.js resources for more detailed instructions and best practices:
* Strapi guide to authentication and authorization: [https://strapi.io/blog/a-beginners-guide-to-authentication-and-authorization-in-strapi](https://strapi.io/blog/a-beginners-guide-to-authentication-and-authorization-in-strapi)
* NextAuth.js library: [https://next-auth.js.org/](https://next-auth.js.org/)
**Remember:** If you choose not to implement user accounts, you can skip this section and proceed to building the core functionalities of your nutrition planning app.
## Building the Next.js Frontend: Reusable Components
Now that you have the data flowing from Strapi to your Next.js application, let's focus on building the user interface using reusable components. Here's how to create some essential components for your nutrition planning app:
**1. Layouts (header, footer, navigation):**
These components will provide a consistent structure across all your app pages.
* Create a directory named `components` in your Next.js project's root directory (`my-nutrition-app/components`).
* Inside `components`, create a file named `Layout.js`. This will be your main layout component.
```javascript
// components/Layout.js
import React from 'react';
function Layout({ children }) {
return (
<div className="container">
<header>
<h1>My Nutrition Planner</h1>
{/* Navigation links can be added here */}
</header>
<main>{children}</main>
<footer>© Your Name or Company 2024</footer>
</div>
);
}
export default Layout;
```
* You can create separate components for specific navigation elements within the `Layout` component if needed.
**2. Food Cards (displaying food information):**
These components will display individual food items with details like name, calories, and macros.
* Create a file named `FoodCard.js` inside the `components` directory.
```javascript
// components/FoodCard.js
import React from 'react';
function FoodCard({ food }) {
return (
<div className="food-card">
<h3>{food.name}</h3>
<p>{food.calories} kcal</p>
<p>Carbs: {food.carbs}g Protein: {food.protein}g Fat: {food.fat}g</p>
{/* Add button or link for adding food to a meal plan (optional) */}
</div>
);
}
export default FoodCard;
```
**3. Meal Sections (breakfast, lunch, etc.):**
These components will represent individual meals within a daily plan, potentially containing a list of associated food cards.
* Create a file named `MealSection.js` inside the `components` directory.
```javascript
// components/MealSection.js
import React from 'react';
import FoodCard from './FoodCard'; // Import the FoodCard component
function MealSection({ title, foods }) {
return (
<section className="meal-section">
<h2>{title}</h2>
<ul>
{foods.map((food) => (
<li key={food.id}>
<FoodCard food={food} />
</li>
))}
</ul>
</section>
);
}
export default MealSection;
```
**4. Daily Plan Overview:**
This component will display the overall structure of a daily plan, potentially including meal sections and functionalities for adding/removing foods.
* Create a file named `DailyPlan.js` inside the `components` directory.
```javascript
// components/DailyPlan.js
import React, { useState } from 'react';
import MealSection from './MealSection'; // Import the MealSection component
function DailyPlan({ plan }) {
const [selectedFoods, setSelectedFoods] = useState([]); // State for selected foods
// Functions for adding/removing foods from the plan (implementation details omitted)
return (
<div className="daily-plan">
<h2>Daily Plan</h2>
{/* Display date or other relevant information about the plan */}
{plan.meals.map((meal) => (
<MealSection key={meal.id} title={meal.name} foods={selectedFoods.filter((food) => food.mealId === meal.id)} />
))}
{/* Buttons or functionalities for adding/removing foods and managing the plan */}
</div>
);
}
export default DailyPlan;
```
**Explanation:**
* These are basic examples, and you can customize them further with styling (CSS) and additional functionalities like user interactions for managing meal plans.
* Notice how we import and utilize the previously created components (`FoodCard` and `MealSection`) within these components, promoting reusability.
## Data Display and Management in Your Nutrition App
Now that you have the reusable components and data flowing from Strapi, let's explore how to display information and implement functionalities in your Next.js app.
**1. Populating Components with Fetched Data:**
We'll use the data fetched from Strapi using `getStaticProps` or potentially `getServerSideProps` (depending on your chosen approach) to populate your components. Here's an example:
* In a page component (e.g., `pages/plans/index.js`), you can fetch daily plans and then use the data within your `DailyPlan` component:
```javascript
// pages/plans/index.js
import DailyPlan from '../../components/DailyPlan'; // Import the DailyPlan component
export async function getStaticProps() {
// Fetch daily plans data from Strapi
// (note: Strapi v4+ prefixes routes with /api and nests results under a `data` key; adjust for your version)
const response = await fetch(`${process.env.NEXT_PUBLIC_STRAPI_URL}/daily-plans`);
const dailyPlans = await response.json();
return {
props: { dailyPlans },
};
}
function PlansPage({ dailyPlans }) {
return (
<div>
<h1>My Daily Plans</h1>
{dailyPlans.map((plan) => (
<DailyPlan key={plan.id} plan={plan} />
))}
</div>
);
}
export default PlansPage;
```
* Within the `DailyPlan` component, you can access the `plan` prop containing the fetched data and use it to display details and associated meals with `MealSection` components.
**2. Adding/Removing Foods from Meals (Optional):**
This functionality requires managing the state of selected foods within a meal plan. Here's a basic example:
* In the `DailyPlan` component, you can introduce state for selected foods using `useState`:
```javascript
function DailyPlan({ plan }) {
const [selectedFoods, setSelectedFoods] = useState([]); // State for selected foods
const handleAddFood = (food) => {
setSelectedFoods([...selectedFoods, food]); // Add food to selected list
};
const handleRemoveFood = (foodId) => {
setSelectedFoods(selectedFoods.filter((food) => food.id !== foodId)); // Remove food from list
};
// ... (rest of the component)
}
```
* You can then pass these functions and the `selectedFoods` state as props to the `MealSection` component, allowing it to display the selected foods and potentially offer functionalities for adding/removing them based on user interaction.
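For instance, a lightly extended `MealSection` could accept those props like this (a sketch; how users pick new foods to add, such as a search box or dropdown, is left to you):
```javascript
// components/MealSection.js (extended) -- wiring up the add/remove handlers
import React from 'react';
import FoodCard from './FoodCard';

function MealSection({ title, foods, onAddFood, onRemoveFood }) {
  return (
    <section className="meal-section">
      <h2>{title}</h2>
      <ul>
        {foods.map((food) => (
          <li key={food.id}>
            <FoodCard food={food} />
            <button onClick={() => onRemoveFood(food.id)}>Remove</button>
          </li>
        ))}
      </ul>
      {/* onAddFood would be triggered from whatever food-picker UI you build here */}
    </section>
  );
}

export default MealSection;
```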
**3. Creating/Editing Daily Plans:**
* This functionality involves creating forms or interfaces for users to add new daily plans and potentially edit existing ones.
* You'll need to implement form handling and logic to send data (new plan details) to your Strapi API for creation. Libraries like `Formik` or `React Hook Form` can simplify form management.
* Editing plans might involve fetching a specific plan's details, pre-populating a form with existing data, and sending updates back to Strapi upon user submission.
```js
// pages/plans/[planId]/edit.js -- a dynamic route, so `planId` arrives via context.params
import React, { useState, useEffect } from 'react';
import axios from 'axios'; // Assuming Axios is used for API calls
import { useRouter } from 'next/router'; // For routing after update
function EditPlanPage({ planId }) {
const router = useRouter();
const [date, setDate] = useState(''); // State for plan date
// Fetch plan details on initial render
useEffect(() => {
const fetchData = async () => {
try {
const response = await axios.get(`${process.env.NEXT_PUBLIC_STRAPI_URL}/daily-plans/${planId}`);
const plan = response.data;
setDate(plan.date); // Set fetched date
} catch (error) {
console.error('Error fetching plan:', error);
// Handle errors (e.g., redirect to error page)
}
};
fetchData();
}, [planId]); // Fetch data only on planId change
const handleSubmit = async (event) => {
event.preventDefault();
const updatedPlan = {
date,
};
try {
const response = await axios.put(`${process.env.NEXT_PUBLIC_STRAPI_URL}/daily-plans/${planId}`, updatedPlan);
console.log('Plan updated successfully:', response.data);
router.push('/plans'); // Redirect to plan list after successful update
} catch (error) {
console.error('Error updating plan:', error);
// Handle errors appropriately (e.g., display error message)
}
};
return (
<div>
<h1>Edit Daily Plan</h1>
<form onSubmit={handleSubmit}>
<label htmlFor="date">Date:</label>
<input type="date" id="date" value={date} onChange={(e) => setDate(e.target.value)} />
<button type="submit">Update Plan</button>
</form>
</div>
);
}
export async function getServerSideProps(context) {
const { planId } = context.params;
return {
props: {
planId,
},
};
}
export default EditPlanPage;
```
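For the creation side mentioned above, a minimal sketch could look like the following; it assumes the same `daily-plans` endpoint shape as the edit example:
```javascript
// pages/plans/new.js -- minimal creation form mirroring the edit example
import React, { useState } from 'react';
import axios from 'axios';
import { useRouter } from 'next/router';

function NewPlanPage() {
  const router = useRouter();
  const [date, setDate] = useState('');

  const handleSubmit = async (event) => {
    event.preventDefault();
    try {
      await axios.post(`${process.env.NEXT_PUBLIC_STRAPI_URL}/daily-plans`, { date });
      router.push('/plans'); // Back to the plan list on success
    } catch (error) {
      console.error('Error creating plan:', error);
    }
  };

  return (
    <form onSubmit={handleSubmit}>
      <label htmlFor="date">Date:</label>
      <input type="date" id="date" value={date} onChange={(e) => setDate(e.target.value)} />
      <button type="submit">Create Plan</button>
    </form>
  );
}

export default NewPlanPage;
```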
**4. Tracking Nutritional Goals (Optional):**
* This feature requires additional functionalities. You might need to:
* Define nutritional goals (calories, macros) for users.
* Calculate and display total nutrient intake based on selected foods within a meal plan.
* Potentially store and visualize progress over time.
```js
// components/DailyPlan.js (modified)
import React, { useState } from 'react';
import MealSection from './MealSection'; // Import the MealSection component
function DailyPlan({ plan, foods }) {
const [selectedFoods, setSelectedFoods] = useState([]); // State for selected foods
const handleAddFood = (food) => {
setSelectedFoods([...selectedFoods, food]); // Add food to selected list
};
const handleRemoveFood = (foodId) => {
setSelectedFoods(selectedFoods.filter((food) => food.id !== foodId)); // Remove food from list
};
const calculateTotalNutrients = () => {
let totalCalories = 0;
let totalCarbs = 0;
let totalProtein = 0;
let totalFat = 0;
selectedFoods.forEach((food) => {
totalCalories += food.calories;
totalCarbs += food.carbs;
totalProtein += food.protein;
totalFat += food.fat;
});
return { calories: totalCalories, carbs: totalCarbs, protein: totalProtein, fat: totalFat };
};
const nutrients = calculateTotalNutrients();
return (
<div className="daily-plan">
<h2>Daily Plan - {plan.date}</h2>
{/* Display other plan details (optional) */}
{plan.meals.map((meal) => (
<MealSection key={meal.id} title={meal.name} foods={selectedFoods.filter((food) => food.mealId === meal.id)} onAddFood={handleAddFood} onRemoveFood={handleRemoveFood} />
))}
{/* Buttons or functionalities for adding/removing foods and managing the plan */}
<div className="nutrient-summary">
<h3>Nutrient Summary</h3>
<p>Calories: {nutrients.calories} kcal</p>
<p>Carbs: {nutrients.carbs}g Protein: {nutrients.protein}g Fat: {nutrients.fat}g</p>
</div>
</div>
);
}
export default DailyPlan;
```
**Note:**
* These are simplified examples, and you'll need to implement the logic for data manipulation, user interactions, and API calls based on your specific requirements.
* Consider using state management libraries like Redux or Zustand for managing complex application state across components.
## Integrating with External Food Databases (USDA)
While Strapi can manage your own custom food data, you can also leverage external food databases like the USDA FoodData Central API ([https://fdc.nal.usda.gov/](https://fdc.nal.usda.gov/)) to enrich your app's functionality. Here's an approach to integrate with the USDA API:
**1. USDA FoodData Central API:**
The USDA FoodData Central API provides a vast dataset of standardized food information, including nutrients, descriptions, and standard units.
**2. API Access and Calls:**
* You'll need to register for a free API key on the FoodData Central site ([https://fdc.nal.usda.gov/api-key-signup.html](https://fdc.nal.usda.gov/api-key-signup.html)).
* The USDA API uses a RESTful architecture, allowing you to make HTTP requests to retrieve data based on your needs.
**3. Example Code (using Axios):**
```javascript
import axios from 'axios';
const USDA_API_URL = 'https://api.nal.usda.gov/fdc/v1/foods/search'; // FoodData Central search endpoint
const YOUR_API_KEY = 'YOUR_USDA_API_KEY'; // Replace with your actual API key
async function searchUSDAFoods(searchTerm) {
  const params = {
    api_key: YOUR_API_KEY,
    query: searchTerm, // Search text
    pageSize: 10, // Limit results per page (optional)
  };
try {
const response = await axios.get(USDA_API_URL, { params });
const foods = response.data.foods;
return foods;
} catch (error) {
console.error('Error fetching USDA foods:', error);
// Handle errors appropriately (e.g., display an error message)
return [];
}
}
// Example usage:
const searchTerm = 'apple';
searchUSDAFoods(searchTerm).then((foods) => {
console.log('USDA Foods search results:', foods);
// Use the fetched food data (e.g., display search results)
});
```
* This code defines the USDA API URL and utilizes your API key.
* The `searchUSDAFoods` function takes a search term and builds the API request parameters.
* It retrieves food data based on the search term and returns an array of results.
* Remember to replace `YOUR_USDA_API_KEY` with your actual key.
**Important Considerations:**
* The USDA API has usage limits and terms of service. Ensure you comply with their guidelines.
* Consider implementing search debouncing to avoid overwhelming the API with excessive requests (a small sketch follows this list).
* You might need additional logic to handle potential discrepancies between your Strapi food data and the USDA data (e.g., matching based on identifiers or names).
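For the debouncing suggestion above, a small helper is often all you need:
```javascript
// utils/debounce.js -- delays a function call until input has settled
function debounce(fn, delayMs = 300) {
  let timer;
  return (...args) => {
    clearTimeout(timer); // Reset the countdown on every keystroke
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Usage: only hits the USDA API once the user pauses typing
const debouncedSearch = debounce((term) => searchUSDAFoods(term), 400);
```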
## Implementing a Meal Suggestion Engine
Enhancing your nutrition planning app with a meal suggestion engine based on user preferences and goals can significantly improve its value proposition. Here's a breakdown of the steps involved:
**1. User Preferences and Goals:**
* Collect user data regarding dietary restrictions (vegetarian, vegan, etc.), allergies, and food preferences (dislikes, favorite cuisines).
* Allow users to set goals like weight management (calories) or targeted nutrient intake (protein, carbs, fat).
```js
// components/UserProfile.js
import React, { useState } from 'react';
function UserProfile() {
const [preferences, setPreferences] = useState({
dietaryRestrictions: [], // List of dietary restrictions (vegetarian, vegan, etc.)
allergies: [], // List of allergies
dislikes: [], // List of disliked foods
favoriteCuisines: [], // List of favorite cuisines
});
const [goals, setGoals] = useState({
calories: null, // Daily calorie target
macros: {
protein: null, // Target protein intake
carbs: null, // Target carbs intake
fat: null, // Target fat intake
},
});
// Handle user input for preferences and goals (form fields or selection components)
return (
<div>
<h2>User Profile</h2>
{/* Form or components to capture user preferences and goals */}
</div>
);
}
export default UserProfile;
```
**2. Food Data Integration:**
* Utilize your existing Strapi food data and potentially integrate with external databases like USDA (as discussed previously).
``` js
// utils/foodData.js
import axios from 'axios';
const USDA_API_URL = 'https://api.nal.usda.gov/fdc/v1/foods/search'; // FoodData Central search endpoint
const YOUR_USDA_API_KEY = 'YOUR_USDA_API_KEY'; // Replace with your actual key
async function fetchStrapiFoods() {
const response = await fetch(`${process.env.NEXT_PUBLIC_STRAPI_URL}/foods`);
return response.json();
}
async function searchUSDAFoods(searchTerm) {
  const params = {
    api_key: YOUR_USDA_API_KEY,
    query: searchTerm,
  };
try {
const response = await axios.get(USDA_API_URL, { params });
return response.data.foods;
} catch (error) {
console.error('Error fetching USDA foods:', error);
return [];
}
}
export { fetchStrapiFoods, searchUSDAFoods };
```
This code defines functions for fetching both Strapi food data and searching the USDA API (replace YOUR_USDA_API_KEY with your actual key).
* Ensure your food data includes relevant information like calories, macronutrients (carbs, protein, fat), and potentially micronutrients (vitamins, minerals).
**3. Meal Planning Algorithm:**
The core of your suggestion engine is a logic that recommends meals based on user preferences and goals:
- **Filter Foods:** Based on user preferences, filter out foods that don't fit their dietary restrictions or allergies.
```js
function filterFoods(foods, userPreferences) {
return foods.filter((food) => {
// Check if food aligns with dietary restrictions, allergies, and dislikes
const restrictionsMet = !userPreferences.dietaryRestrictions.includes(food.dietaryRestriction);
const noAllergens = !userPreferences.allergies.some((allergen) => food.allergens.includes(allergen));
const notDisliked = !userPreferences.dislikes.includes(food.name);
return restrictionsMet && noAllergens && notDisliked;
});
}
```
This function filters the food data based on user preferences.
- **Prioritize Based on Goals:** If the user has set goals (e.g., calorie deficit for weight loss), prioritize foods that align with those goals. You can implement scoring mechanisms based on calorie/nutrient content.
```js
function prioritizeFoods(foods, goals) {
  // Implement logic based on your goal criteria (e.g., calorie deficit)
  // Here's a simplified example prioritizing lower-calorie foods
  // (the array is copied first so the input isn't mutated):
  return [...foods].sort((food1, food2) => food1.calories - food2.calories);
}
```
- **Meal Composition:** Consider building balanced meals with appropriate proportions of macronutrients (e.g., balanced protein, carbs, and fat for most meals).
- **Variety:** Introduce variety by suggesting different food options within a meal category while still adhering to preferences and goals.
```js
function suggestMeal(filteredFoods, goals) {
const meal = [];
let remainingCalories = goals.calories; // Track remaining calories for balanced meal
// Iterate through food categories (protein, carbs, fat)
for (const category of ['protein', 'carbs', 'fat']) {
const categoryFoods = filteredFoods.filter((food) => food.category === category);
// Select a food prioritizing lower calorie options while considering variety
const selectedFood = prioritizeFoods(categoryFoods, goals)[0];
if (selectedFood && selectedFood.calories <= remainingCalories) {
meal.push(selectedFood);
remainingCalories -= selectedFood.calories;
}
}
return meal;
}
```
This function suggests a balanced meal with variety, considering remaining calorie budget from user goals.
**Code Snippet (Simplified Example):**
```javascript
function suggestMeals(userPreferences, goals, foods) {
// Filter foods based on user preferences (restrictions, allergies)
  const filteredFoods = foods.filter((food) => {
    // Implement logic to check if food aligns with user preferences
    return true; // Placeholder: see filterFoods() above for a concrete version
  });
// Prioritize based on goals (e.g., calorie deficit)
  filteredFoods.sort((food1, food2) => {
    // Implement logic to compare foods based on goal criteria (e.g., calorie content)
    return food1.calories - food2.calories; // Placeholder: prefer lower-calorie foods
  });
// Suggest meals with balanced macronutrients and variety
const suggestedMeals = [];
for (let i = 0; i < 3; i++) { // Suggest 3 meals for example
const meal = [];
// Implement logic to select and add foods to the meal while considering variety
suggestedMeals.push(meal);
}
return suggestedMeals;
}
// Example usage:
const userPreferences = { vegan: true };
const goals = { calorieTarget: 1800 };
const foods = yourStrapiFoodData; // Replace with actual food data
const suggestedMeals = suggestMeals(userPreferences, goals, foods);
console.log('Suggested meals:', suggestedMeals);
```
* This is a simplified example. The actual logic for filtering, prioritizing, and suggesting meals will involve more complex calculations and considerations.
* Consider using libraries like Lodash for utility functions like filtering and sorting.
**4. User Interface Integration:**
* Present the suggested meals within your app's interface, allowing users to easily view and potentially customize them based on their preferences.
* Provide options for users to provide feedback on the suggestions, further refining the engine over time.
**5. Machine Learning (Optional):**
* For a more advanced approach, explore integrating machine learning techniques like collaborative filtering to personalize meal suggestions based on user behavior and historical data.
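As a taste of what collaborative filtering builds on, here is a tiny and deliberately naive similarity function; real recommendation systems would use dedicated libraries and much richer data:
```javascript
// Cosine similarity between two rating vectors (e.g., per-user ratings of two meals)
function cosineSimilarity(a, b) {
  const dot = a.reduce((sum, value, i) => sum + value * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b) || 1); // Guard against division by zero
}

const similarity = cosineSimilarity([5, 3, 0, 1], [4, 0, 0, 1]);
console.log('Meal similarity:', similarity.toFixed(2)); // Higher means more alike
```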
**Note:**
* Implementing a robust meal suggestion engine requires careful consideration of user preferences, goal alignment, and dietary balance.
* Start with a basic approach and gradually improve the recommendation logic based on user feedback and potential machine learning integration.
## Grocery List Generation Based on Planned Meals
Here's how to implement grocery list generation based on planned meals in your nutrition app:
**1. Data Integration:**
* Ensure you have access to:
* Planned meals data, including the list of ingredients for each meal.
* Food data containing information like quantity units (e.g., grams, cups).
**2. Logic for Grocery List Generation:**
```javascript
function generateGroceryList(plannedMeals, foods) {
const groceryList = {}; // Map to store ingredients and their quantities
plannedMeals.forEach((meal) => {
meal.ingredients.forEach((ingredient) => {
const existingItem = groceryList[ingredient.foodId];
const foodData = foods.find((food) => food.id === ingredient.foodId);
// Handle existing item in the list
if (existingItem) {
existingItem.quantity += ingredient.quantity;
} else {
// Add new item to the list with appropriate quantity and unit
groceryList[ingredient.foodId] = {
name: foodData.name,
quantity: ingredient.quantity,
unit: foodData.unit, // Assuming unit information exists in food data
};
}
});
});
return Object.values(groceryList); // Convert map to an array for easier display
}
```
* This function iterates through planned meals and their ingredients.
* It checks the grocery list for existing entries based on the food ID.
* If an item already exists, it adds the new quantity to the existing one.
* If a new item is encountered, it adds it to the list with details like name, quantity, and unit.
**3. User Interface Integration:**
* Display the generated grocery list in a dedicated section of your app.
* Allow users to potentially:
* Edit quantities for ingredients on the list.
* Mark items as purchased or collected.
* Export the list (e.g., print or share as a text file).
```javascript
// components/GroceryList.js
import React from 'react';
function GroceryList({ groceryList }) {
return (
<div>
<h2>Grocery List</h2>
<ul>
{groceryList.map((item) => (
<li key={item.name}>
{item.quantity} {item.unit} - {item.name}
</li>
))}
</ul>
</div>
);
}
export default GroceryList;
```
* This component displays the generated grocery list with item details (quantity, unit, name).
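Building on the "mark items as purchased" idea above, one possible extension of the component looks like this (the state shape and inline styling are just illustrative choices):
```javascript
// components/GroceryList.js (extended) -- adds a "purchased" checkbox per item
import React, { useState } from 'react';

function GroceryListWithChecks({ groceryList }) {
  const [purchased, setPurchased] = useState({}); // Map of item name -> boolean

  const toggle = (name) => setPurchased((prev) => ({ ...prev, [name]: !prev[name] }));

  return (
    <ul>
      {groceryList.map((item) => (
        <li key={item.name} style={{ textDecoration: purchased[item.name] ? 'line-through' : 'none' }}>
          <input type="checkbox" checked={!!purchased[item.name]} onChange={() => toggle(item.name)} />
          {item.quantity} {item.unit} - {item.name}
        </li>
      ))}
    </ul>
  );
}

export default GroceryListWithChecks;
```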
**4. Additional Considerations:**
* You might need to handle cases where a food item is used in multiple planned meals, ensuring accurate quantity accumulation in the grocery list.
* Consider allowing users to set preferred grocery stores and potentially integrate with grocery delivery services (optional, advanced feature).
## Progress Tracking and Reports for Your Nutrition App
**1. Data Collection and Storage:**
* Track user data relevant to progress, such as:
* Daily calorie intake and macronutrient (carbs, protein, fat) breakdown.
* Weight measurements (if user chooses to track it).
* Notes or reflections on meals and overall progress.
* Utilize a database like Strapi to store this data securely.
**2. User Interface for Tracking:**
* Provide user-friendly interfaces for entering and reviewing progress data:
* Allow users to log daily meals and their corresponding calorie/nutrient information.
* Offer options for weight entries, potentially with charts visualizing weight trends over time.
* Include a dedicated section for notes or reflections.
```javascript
// components/DailyProgress.js
import React, { useState } from 'react';
function DailyProgress({ date, onMealAdd, onWeightUpdate, onNoteSubmit }) {
const [calories, setCalories] = useState(0);
const [macros, setMacros] = useState({ carbs: 0, protein: 0, fat: 0 });
const [weight, setWeight] = useState(null);
const [note, setNote] = useState('');
// Handle user input for calories, macros, weight, and notes
return (
<div>
<h2>Daily Progress - {date}</h2>
<form onSubmit={(e) => onMealAdd(e, calories, macros)}> {/* Submit form to add meal data */}
{/* Input fields for calories and macros */}
</form>
<form onSubmit={(e) => onWeightUpdate(e, weight)}> {/* Submit form to update weight */}
<label htmlFor="weight">Weight:</label>
<input type="number" id="weight" value={weight} onChange={(e) => setWeight(e.target.value)} />
</form>
<textarea value={note} onChange={(e) => setNote(e.target.value)} /> {/* Text area for notes */}
<button onClick={() => onNoteSubmit(note)}>Add Note</button>
</div>
);
}
export default DailyProgress;
```
**3. Progress Reports:**
* Generate reports summarizing user progress over a chosen period (week, month, etc.).
* Utilize charts and graphs to visually represent trends in calorie intake, macronutrient distribution, and potentially weight changes (if tracked).
* Allow users to compare progress against their initial goals or targets.
```javascript
// components/ProgressReport.js
import React, { useState, useEffect } from 'react';
import { Chart as ChartJS, CategoryScale, LinearScale, PointElement, LineElement, Tooltip, Legend } from 'chart.js';
import { Line } from 'react-chartjs-2'; // React wrapper for Chart.js
ChartJS.register(CategoryScale, LinearScale, PointElement, LineElement, Tooltip, Legend); // Chart.js v3+ requires registering components
function ProgressReport({ startDate, endDate, progressData }) {
const [chartData, setChartData] = useState(null);
useEffect(() => {
// Prepare chart data based on progressData (e.g., daily calorie intake)
const labels = progressData.map((day) => day.date);
const calorieData = progressData.map((day) => day.calories);
setChartData({
labels,
datasets: [
{
label: 'Daily Calorie Intake',
data: calorieData,
backgroundColor: 'rgba(255, 99, 132, 0.2)',
borderColor: 'rgba(255, 99, 132, 1)',
borderWidth: 1,
},
],
});
}, [progressData]);
return (
<div>
<h2>Progress Report ({startDate} - {endDate})</h2>
{chartData && <Line data={chartData} />} {/* Display calorie intake chart */}
{/* Display additional charts or metrics based on progressData */}
</div>
);
}
export default ProgressReport;
```
**4. Additional Considerations:**
* Allow users to customize the data displayed in reports (e.g., filter by date range or specific nutrients; see the sketch after this list).
* Integrate with wearable devices or fitness trackers to import weight and activity data (optional, advanced feature).
* Provide motivational messages or insights based on user progress.
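For the date-range filtering mentioned above, a simple helper applied before charting could look like this (it assumes each entry has a `date` string that `Date` can parse):
```javascript
// Keep only the progress entries that fall inside the chosen range
function filterByDateRange(progressData, startDate, endDate) {
  const start = new Date(startDate);
  const end = new Date(endDate);
  return progressData.filter((day) => {
    const d = new Date(day.date);
    return d >= start && d <= end;
  });
}
```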
## Deployment Strategies for Your Nutrition App
Here's a breakdown of deployment options and considerations for your Next.js nutrition app:
**1. Choosing a Deployment Platform:**
* **Vercel:** A popular platform offering seamless deployment for Next.js applications. It integrates well with Git providers like GitHub and provides features like serverless functions and custom domains.
* **Netlify:** Another popular option with a user-friendly interface and features like continuous deployment, environment variables, and global CDN (Content Delivery Network) for fast delivery.
**2. Configuring Environment Variables:**
Both Vercel and Netlify allow you to manage environment variables securely. These variables store sensitive information like API keys or database connection strings that shouldn't be exposed in your code.
**Steps (using Vercel as an example):**
1. Go to your Vercel project settings.
2. Navigate to the "Environment" section.
3. Add key-value pairs for your environment variables (e.g., `NEXT_PUBLIC_STRAPI_URL` for your Strapi API endpoint, `USDA_API_KEY` for your USDA API key).
4. Access these variables in your Next.js app using `process.env.VARIABLE_NAME`.
**3. Setting Up CI/CD Pipeline (Optional):**
Continuous Integration/Continuous Delivery (CI/CD) automates the process of building, testing, and deploying your application. This streamlines development and reduces the risk of errors.
**Steps (using Vercel with GitHub integration):**
1. Connect your Vercel project to your GitHub repository.
2. Configure your `vercel.json` file to specify build commands and environment variables.
3. Vercel will automatically trigger a deployment whenever you push code changes to your main branch in GitHub.
**Additional Considerations:**
* **Serverless Functions (Optional):** Both Vercel and Netlify offer serverless functions that can be used for backend logic without managing servers. Consider using them for functionalities that don't require a full-fledged backend (e.g., user authentication).
* **Monitoring and Logging:** Implement monitoring and logging solutions to track your application's performance and identify any issues after deployment.
**Note:**
* Choose a deployment platform that aligns with your project requirements and preferences.
* Securely manage environment variables to protect sensitive information.
* Consider implementing CI/CD for a smoother development workflow.
## Conclusion
Building a comprehensive nutrition app requires integrating various functionalities. Users can manage preferences and goals, receive meal suggestions based on their needs, generate grocery lists from planned meals, and track progress with insightful reports. By incorporating UI frameworks, external food databases, and optional features like CI/CD and serverless functions, you can create a user-friendly and effective app.
Choose a reliable deployment platform and prioritize user feedback to make your app the ultimate companion for a healthy lifestyle journey. | joanayebola |
1,913,032 | What is a Proxy Server and How Does it Work? | What is a Proxy Server and How Does it Work? A proxy server acts as an intermediary... | 0 | 2024-07-05T17:04:16 | https://dev.to/sh20raj/what-is-a-proxy-server-and-how-does-it-work-155g | proxy, proxyservers, servers, security | ### What is a Proxy Server and How Does it Work?
A proxy server acts as an intermediary between a user’s computer and the internet. When you use a proxy server, your internet traffic is routed through the proxy, which makes requests on your behalf. This can enhance privacy, improve security, and allow you to bypass content restrictions.
![](https://upload.wikimedia.org/wikipedia/commons/b/bb/Proxy_concept_en.svg)
#### Types of Proxy Servers
1. **Forward Proxy**
- This is the most common type of proxy. It sits between a client and the wider internet, forwarding client requests to the internet. It can cache data, filter content, and manage bandwidth usage.
2. **Reverse Proxy**
- Reverse proxies handle requests from the internet to internal servers. They are often used for load balancing, caching static content, and enhancing security by hiding the details of backend servers from the client.
3. **Web Proxy**
- Specifically designed for web traffic, web proxies handle HTTP and HTTPS requests. They allow users to browse the internet anonymously by masking their IP addresses.
4. **Anonymous Proxy**
- These proxies hide the user’s IP address from the destination server, offering a basic level of anonymity. They are commonly used to prevent tracking and protect user privacy.
5. **High Anonymity Proxy**
- High anonymity proxies provide a higher level of privacy by frequently changing IP addresses and ensuring that the proxy server itself is not identifiable as a proxy.
6. **Transparent Proxy**
- Transparent proxies do not modify the request or hide the user’s IP address. They are often used in organizational settings for content filtering and caching.
7. **Distorting Proxy**
- These proxies provide a fake IP address to the destination server, offering an additional layer of anonymity by hiding the user’s true IP address.
8. **Rotating Proxy**
- Rotating proxies change the IP address for each request, providing higher levels of anonymity and reducing the risk of IP bans.
#### How Proxy Servers Work
1. **Client Request**
- The client (your computer) sends a request to access a resource on the internet. This request is directed to the proxy server instead of directly to the destination server.
2. **Proxy Server**
- The proxy server receives the client’s request, processes it, and then forwards it to the destination server. The destination server sees the request as coming from the proxy, not the client.
3. **Fetching the Resource**
- The destination server processes the request and sends the response back to the proxy server. The proxy server then relays this response to the client.
4. **Caching**
- Proxy servers often cache the responses from destination servers. If another client requests the same resource, the proxy can deliver it directly from its cache, improving load times and reducing bandwidth usage.
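To make the flow concrete, here is a small illustrative example of routing a request through a forward proxy with the axios library; the proxy host and port below are placeholders, not a real server:
```javascript
// Send an HTTP request via a proxy instead of connecting directly
const axios = require('axios');

axios
  .get('http://example.com', {
    proxy: { host: '203.0.113.10', port: 8080 }, // your proxy's address and port
  })
  .then((res) => console.log('Fetched via proxy, status:', res.status))
  .catch((err) => console.error('Proxy request failed:', err.message));
```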
#### Benefits of Using Proxy Servers
1. **Privacy and Anonymity**
- By masking your IP address, proxies can help protect your identity online and prevent tracking by websites.
2. **Security**
- Proxies can filter malicious content, block access to harmful sites, and protect against some types of cyber attacks.
3. **Access Control**
- Organizations use proxies to restrict access to certain websites, ensuring that employees or users adhere to usage policies.
4. **Bandwidth Savings and Speed**
- Caching frequently accessed content reduces bandwidth usage and improves loading times for users.
5. **Bypass Restrictions**
- Proxies can help users bypass geographical restrictions and access content that might be blocked in their region.
#### Risks and Considerations
1. **Lack of Encryption**
- Most proxies do not encrypt your traffic, leaving it vulnerable to interception by hackers or third parties.
2. **Trust Issues**
- Users must trust the proxy server provider as it can potentially monitor and log user activities.
3. **Performance**
- Proxies can introduce latency and slow down internet connections, especially if they are overloaded or poorly configured.
4. **Legal and Ethical Concerns**
- Using proxies to bypass restrictions or access blocked content can sometimes violate terms of service or local laws.
Proxy servers are versatile tools that can enhance your online experience by providing privacy, security, and access control. However, it’s essential to choose the right type of proxy and understand its limitations and risks.
![](https://www.seobility.net/en/wiki/images/8/8a/Proxy-Server.png)
For more detailed information on proxy servers, you can visit resources such as [Kinsta](https://kinsta.com), [Wikipedia](https://en.wikipedia.org), [TechRadar](https://www.techradar.com), and [AVG](https://www.avg.com). | sh20raj |
1,913,031 | What is HTTPS and How Does it Work? | What is HTTPS and How Does it Work? Introduction HTTPS, which stands for... | 0 | 2024-07-05T16:56:40 | https://dev.to/sh20raj/what-is-https-and-how-does-it-work-3nj1 | webdev, https, ssl, javascript | ## What is HTTPS and How Does it Work?
### Introduction
HTTPS, which stands for Hypertext Transfer Protocol Secure, is an advanced version of the standard HTTP protocol. It adds a layer of security by encrypting data transmitted between a user's browser and a web server. This encryption ensures that sensitive information remains confidential and protected from eavesdropping and tampering.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kvo5rr6ex8bf66tfsgtw.png)
### How HTTPS Works
HTTPS utilizes two primary components: the HTTPS protocol and the Transport Layer Security (TLS) protocol.
#### Encryption and Authentication
1. **TLS Encryption**: HTTPS encrypts data using TLS, formerly known as SSL (Secure Sockets Layer). During the handshake, TLS uses asymmetric encryption (a public key to encrypt data and a private key to decrypt it) to exchange a session key, after which faster symmetric encryption takes over. This ensures that even if data is intercepted, it cannot be read without the right key.
2. **Digital Certificates**: Websites using HTTPS have a digital certificate issued by a Certificate Authority (CA). This certificate verifies the website's identity, ensuring users that they are communicating with the legitimate site and not an imposter.
#### The HTTPS Process
1. **SSL/TLS Handshake**: When a user connects to an HTTPS-secured website, their browser and the server perform a handshake. During this process, the server presents its digital certificate to the browser.
2. **Certificate Verification**: The browser checks the certificate against a list of trusted CAs. If the certificate is valid, the browser and server establish a secure, encrypted connection.
3. **Data Transmission**: Once the secure connection is established, data can be transmitted safely between the browser and the server. This encrypted data cannot be intercepted or altered by third parties.
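For a hands-on sense of what serving traffic over TLS looks like, here is a minimal Node.js HTTPS server; the certificate and key paths are placeholders for files you would obtain from a CA such as Let's Encrypt:
```javascript
// Minimal HTTPS server using Node's built-in module
const https = require('https');
const fs = require('fs');

const options = {
  key: fs.readFileSync('./certs/server-key.pem'),   // private key (placeholder path)
  cert: fs.readFileSync('./certs/server-cert.pem'), // certificate (placeholder path)
};

https
  .createServer(options, (req, res) => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello over TLS!\n'); // everything on this connection is encrypted
  })
  .listen(443, () => console.log('HTTPS server listening on port 443'));
```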
### Benefits of HTTPS
1. **Enhanced Security**: HTTPS protects against various attacks, including man-in-the-middle attacks, where an attacker intercepts and possibly alters the communication between two parties.
2. **Data Integrity**: HTTPS ensures that data transferred between the browser and the server is not tampered with during transmission.
3. **Privacy**: It keeps user data confidential, protecting sensitive information like login credentials, credit card details, and personal information.
4. **Trust and SEO**: Websites using HTTPS are often marked with a padlock icon in the browser's address bar, signaling to users that the site is secure. This fosters trust and can positively impact user behavior. Additionally, search engines like Google use HTTPS as a ranking signal, potentially improving a site's visibility in search results.
### Why HTTPS is Important
Without HTTPS, data transmitted over the internet is sent in plain text, making it vulnerable to interception and manipulation. HTTPS encrypts this data, providing a secure communication channel that protects against eavesdropping and data breaches. It is crucial for protecting user privacy, maintaining data integrity, and securing online transactions.
### Conclusion
HTTPS has become the standard for secure internet communication, providing essential protection for users and their data. By ensuring encrypted and authenticated data transmission, HTTPS plays a vital role in securing the modern web.
### Images and Visuals
Including images that illustrate the SSL/TLS handshake process, the appearance of the HTTPS padlock in browsers, and examples of encrypted versus plain text data can help visualize the concepts discussed in this article.
![Image from seobility.net](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ur9wh9v3h3ccckug97ih.png)
### References
1. Cloudflare: [What is HTTPS?](https://www.cloudflare.com/learning/ssl/what-is-https/)
2. SEMrush: [What Is HTTPS & How Does It Work?](https://www.semrush.com/blog/what-is-https/)
3. Wikipedia: [HTTPS](https://en.wikipedia.org/wiki/HTTPS)
4. FreeCodeCamp: [What is HTTPS?](https://www.freecodecamp.org/news/what-is-https/)
5. SSL Store: [How Does HTTPS Work?](https://www.thesslstore.com/blog/how-does-https-work/) | sh20raj |
1,913,030 | HydePHP Version v1.7 Released | Overview HydePHP v1.7 is now available for download! HydePHP v1.7 introduces several... | 0 | 2024-07-05T16:54:07 | https://hydephp.com/posts/hydephp-version-1-7-released | hydephp, php, opensource, news |
## Overview
**[HydePHP](https://hydephp.com?ref=dev.to) v1.7 is now available for download!**
HydePHP v1.7 introduces several quality-of-life improvements, including support for HTML comments as Markdown code block labels, new customizable theme toggle options, and improved site URL handling. This release improves flexibility and developer experience through smarter configuration management and more predictable navigation group handling.
## New Features and Improvements
### 1. HTML Comments for Markdown Code Block Filepath Labels
One of the standout features in v1.7 is the added support for using HTML comments to create Markdown code block filepath labels. This improvement allows for more descriptive and organized code snippets in your documentation.
### 2. Customizable Theme Toggle
We've introduced a new config option that allows you to disable the theme toggle buttons. This feature enables your site to automatically use browser settings for theme preferences, providing a more seamless experience for your visitors.
### 3. Improved `serve` Command
The `serve` command now allows you to specify which path to open when using the `--open` option. This addition gives you more control over your development workflow.
### 4. JSON Output for `route:list` Command
For those who love working with structured data, we've added a `--format=json` option to the `route:list` command. This feature makes it easier to integrate route information into your tooling and scripts.
### 5. Improved Navigation Group Handling
We've addressed an issue with navigation group behavior. Now, when a navigation group is set in front matter, it will be used regardless of the subdirectory configuration. This change provides more predictable and flexible navigation structuring.
### 6. Smarter Site URL Handling
Several improvements have been made to how HydePHP handles site URLs:
- The `Hyde::hasSiteUrl()` method now returns false if the site URL is set to localhost.
- `Hyde::url()` will return a relative URL instead of throwing an exception when a path is supplied, even if the site URL is not set.
- These changes reduce the chance of default `localhost` values appearing in production environments.
### 7. Smarter Configuration Management
Setting a site name in the YAML config file now influences all configuration values where it's used, unless already set. This change streamlines the process of customizing your site's identity across various components.
## Deprecations and Removals
- The global `unslash()` function has been deprecated in favor of the namespaced `\Hyde\unslash()` function.
- The `BaseUrlNotSetException` class has been deprecated.
- The Git version is no longer displayed in the debug screen and dashboard.
## Upgrade Guide
To upgrade to this version, simply run the following command:
```bash
composer require hyde/framework:^1.7
```
You may first want to read the [upgrade guide](https://hydephp.com/docs/1.x/updating-hyde) documentation.
## Conclusion
HydePHP v1.7 brings a wealth of improvements that enhance developer experience, site customization, and overall flexibility. Whether you're building a personal blog or a complex documentation site, these new features and enhancements will help you create better static sites with less effort.
We encourage all users to upgrade to this latest version to take advantage of these new features and improvements. As always, we welcome your feedback and contributions to make HydePHP even better.
We hope you enjoy this release, and please report any issues you find at GitHub, https://github.com/hydephp/hyde,
and share your thoughts on Twitter/X, just use the hashtag [#HydePHP](https://twitter.com/search?q=%23HydePHP),
and tag us at [@HydeFramework](https://twitter.com/HydeFramework). | codewithcaen |
1,913,028 | My HNG Internship Journey | Embarking on a journey into mobile development is both thrilling and challenging. As a developer, one... | 0 | 2024-07-05T16:50:24 | https://dev.to/ajuwonlo_04/my-hng-intership-journery-4g90 | flutter, dart, learning | Embarking on a journey into mobile development is both thrilling and challenging. As a developer, one of the critical skills you need is the ability to articulate your value and present your skills effectively. Today, I'll walk you through some of the most popular mobile development platforms and common software architecture patterns, shedding light on their advantages and disadvantages. But first, let me share a bit about myself and the exciting path I'm about to take with the HNG Internship.
**A Glimpse into Mobile Development Platforms**
Mobile development platforms are the foundation upon which we build applications. Here are the most prominent ones:
**1. iOS (Swift)**
Pros:
- Performance: Swift is designed to be fast and efficient.
- User Experience: iOS apps are known for their seamless and intuitive user interfaces.
- Ecosystem: The Apple ecosystem provides excellent integration across devices.
Cons:
- Cost: Development and deployment can be expensive due to the cost of Apple devices and developer accounts.
- Closed Source: Limited flexibility compared to open-source platforms.
**2. Android (Kotlin)**
Pros:
- Open Source: More flexibility and customization.
- Market Share: Android holds a significant share of the global market, offering a large user base.
- Diverse Devices: Wide range of devices from various manufacturers.
Cons:
- Fragmentation: The variety of devices can lead to compatibility issues.
- Security: More susceptible to malware compared to iOS.
**3. Cross-Platform (React Native, Flutter)**
Pros:
- Single Codebase: Write once, run anywhere.
- Cost-Efficient: Reduced development time and cost.
- Community Support: Large, active communities.
Cons:
- Performance: May not match the performance of native apps.
- Complexity: Can be more challenging to debug and optimize.
**Common Software Architecture Patterns**
Choosing the right architecture is crucial for the maintainability and scalability of your mobile app. Let's explore some popular patterns:
**1. MVC (Model-View-Controller)**
Pros:
- Separation of Concerns: Clear division between data, UI, and business logic.
- Testability: Easier to write unit tests.
Cons:
- Complexity: Can become overly complex for larger applications.
- Tight Coupling: Changes in one component can affect others.
**2. MVP (Model-View-Presenter)**
Pros:
- Testability: The presenter can be easily tested.
- Flexibility: View and model can be independently developed.
Cons:
- Boilerplate Code: Often requires more boilerplate code.
- Maintenance: Can be harder to maintain as the project grows.
**3. MVVM (Model-View-ViewModel)**
Pros:
- Data Binding: Simplifies the connection between the UI and the underlying data.
- Modularity: Better separation of concerns than MVC.
Cons:
- Learning Curve: Can be more challenging to learn and implement.
- Performance: Overuse of data binding can lead to performance issues.
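To make the MVVM idea concrete without tying it to a specific mobile framework, here is a tiny JavaScript sketch of a ViewModel notifying a bound view; in React Native or native platforms the binding machinery differs, but the shape is the same:
```javascript
// Framework-agnostic sketch of MVVM's "data binding" in plain JavaScript
class CounterViewModel {
  constructor() {
    this.count = 0;      // state sourced from the Model
    this.listeners = []; // views subscribe here (binding, greatly simplified)
  }
  bind(listener) {
    this.listeners.push(listener);
  }
  increment() {
    this.count += 1;
    this.listeners.forEach((fn) => fn(this.count)); // notify every bound view
  }
}

const vm = new CounterViewModel();
vm.bind((count) => console.log('View re-renders with count =', count));
vm.increment(); // View re-renders with count = 1
```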
**My Journey with HNG Internship**
How I got onboarded into HNG: a friend of mine sent me the internship application link because I told him I really wanted to improve my mobile development skills.
I chose the HNG Internship because of its hands-on approach and the chance to work on real projects. The experience and mentorship offered here are invaluable, and I am excited to contribute and grow within this vibrant community.
**Why the HNG Internship?**
The HNG Internship stands out for several reasons:
- Practical Experience: Work on real-world projects that matter.
- Mentorship: Learn from seasoned professionals who guide you through every step.
- Networking: Connect with peers and industry leaders.
For more information about the HNG Internship, check out the [HNG Internship](https://hng.tech/internship) page and explore their [premium programs](https://hng.tech/premium) to elevate your career further.
**Conclusion**
Mobile development is a dynamic and ever-evolving field. Understanding the various platforms and architecture patterns is crucial for building robust, efficient, and scalable applications. As I embark on my journey with the HNG Internship, I look forward to honing my skills and contributing to the tech community. Stay tuned for updates on my progress and insights from this incredible experience!
Stay connected and follow my journey through the HNG Internship. Let's learn, grow, and make an impact together! | ajuwonlo_04 |
1,913,027 | Umami: An Open-Source Web Analytics Solution | Umami: An Open-Source Web Analytics Solution In today's digital landscape, understanding... | 0 | 2024-07-05T16:49:17 | https://dev.to/sh20raj/umami-an-open-source-web-analytics-solution-4010 | # Umami: An Open-Source Web Analytics Solution
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vgv9f7o39l5t0ivqiiox.png)
In today's digital landscape, understanding user behavior is crucial for improving user experience and making data-driven decisions. Umami is an open-source web analytics solution designed to provide comprehensive insights into your website's performance while prioritizing user privacy. In this article, we'll delve into what Umami is, its key features, various use cases, and a detailed guide on how to get started with it.
{% github https://github.com/umami-software/umami %}
## Umami
**Umami** is a simple, fast, privacy-focused alternative to Google Analytics. It offers essential metrics for understanding your website's traffic and user interactions without the complexity and bloat of traditional analytics platforms.
### Key Features
1. **Privacy-Focused**: Umami respects user privacy by not using cookies and not collecting any personally identifiable information (PII). This ensures compliance with privacy regulations like GDPR and CCPA.
2. **Open Source**: Umami is free to use and can be customized to suit your needs. The source code is available on [GitHub](https://github.com/umami-software/umami).
3. **Simple Setup**: Designed for ease of use, Umami can be deployed on your server or using managed hosting services.
4. **Real-Time Data**: Get real-time analytics, allowing you to see visitor data as it happens.
5. **Customizable Dashboard**: A clean, customizable dashboard lets you control which metrics you want to see.
6. **Multi-Site Support**: Monitor multiple websites from a single Umami instance, ideal for agencies or users with multiple projects.
7. **Event Tracking**: Supports custom event tracking, enabling you to monitor specific actions on your website, such as button clicks and form submissions.
8. **Responsive Design**: Access your analytics on any device with a fully responsive dashboard.
## Use Cases
### 1. Website Performance Monitoring
Umami provides essential metrics like page views, unique visitors, bounce rates, and session durations, helping you understand how users interact with your website and identify areas for improvement.
### 2. Marketing Campaign Tracking
With Umami's event tracking, you can monitor the performance of marketing campaigns by tracking specific user actions such as link clicks, form submissions, and more. This helps you measure the effectiveness of your campaigns and optimize them for better results.
### 3. Compliance with Privacy Regulations
If your business operates in regions with strict privacy regulations (e.g., GDPR, CCPA), Umami's privacy-focused approach ensures that you can collect analytics data without compromising user privacy or violating regulations.
### 4. Managing Multiple Websites
For digital agencies or individuals managing multiple websites, Umami's multi-site support allows you to monitor all your sites from a single dashboard, simplifying analytics management and reporting.
## Getting Started with Umami
A detailed getting started guide can be found at [umami.is/docs](https://umami.is/docs).
### Installing from Source
#### Requirements
- A server with Node.js version 16.13 or newer.
- A database. Umami supports MySQL (minimum v8.0) and PostgreSQL (minimum v12.14) databases.
#### Install Yarn
```bash
npm install -g yarn
```
#### Get the Source Code and Install Packages
```bash
git clone https://github.com/umami-software/umami.git
cd umami
yarn install
```
#### Configure Umami
Create an `.env` file with the following:
```plaintext
DATABASE_URL=connection-url
```
The connection URL format:
- PostgreSQL: `postgresql://username:mypassword@localhost:5432/mydb`
- MySQL: `mysql://username:mypassword@localhost:3306/mydb`
#### Build the Application
```bash
yarn build
```
The build step will create tables in your database if you are installing for the first time. It will also create a login user with username `admin` and password `umami`.
#### Start the Application
```bash
yarn start
```
By default, this will launch the application on `http://localhost:3000`. You will need to either proxy requests from your web server or change the port to serve the application directly.
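After registering your website in the Umami dashboard, you add its tracking snippet to your site's `<head>`. The dashboard shows the exact code for your instance; it looks roughly like this (the URL and website ID below are placeholders):
```html
<!-- Placeholder values: copy the real snippet from your own Umami dashboard -->
<script
  async
  src="https://your-umami-instance.example.com/script.js"
  data-website-id="your-website-id"
></script>
```
For the custom event tracking mentioned earlier, recent Umami versions also expose a client-side tracker (for example, a call like `umami.track('signup-click')`); check the documentation for the exact syntax your version supports.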
### Installing with Docker
To build the Umami container and start up a Postgres database, run:
```bash
docker compose up -d
```
Alternatively, to pull just the Umami Docker image with PostgreSQL support:
```bash
docker pull docker.umami.is/umami-software/umami:postgresql-latest
```
Or with MySQL support:
```bash
docker pull docker.umami.is/umami-software/umami:mysql-latest
```
### Getting Updates
To get the latest features, simply do a pull, install any new dependencies, and rebuild:
```bash
git pull
yarn install
yarn build
```
To update the Docker image, simply pull the new images and rebuild:
```bash
docker compose pull
docker compose up --force-recreate
```
## Support
For additional support, you can visit:
- [GitHub](https://github.com/umami-software/umami)
- [Twitter](https://twitter.com/umamianalytics)
- [LinkedIn](https://www.linkedin.com/company/umami-software/)
- [Discord](https://discord.gg/umami)
## Conclusion
Umami is a powerful, privacy-focused web analytics solution that offers a clean, simple interface for understanding your website traffic. Its open-source nature and ease of deployment make it an excellent choice for developers and businesses looking to gain insights without compromising user privacy. Whether you're a small business owner, a web developer, or a digital marketer, Umami provides the tools you need to make data-driven decisions and improve your online presence.
For more information and to get started with Umami, visit their [GitHub page](https://github.com/umami-software/umami).
---
By implementing Umami, you can enjoy comprehensive web analytics while ensuring your users' privacy and maintaining compliance with global data protection regulations. Give it a try and see how it can transform your understanding of user behavior on your website. | sh20raj |
|
1,913,025 | What is SSL | How does SSL work | Understanding SSL: The Backbone of Internet Security Introduction In the digital age,... | 0 | 2024-07-05T16:47:23 | https://dev.to/sh20raj/what-is-ssl-how-does-ssl-work-30eb | ssl, webdev, security, aws | ### Understanding SSL: The Backbone of Internet Security
**Introduction**
In the digital age, secure communication is paramount. Secure Sockets Layer (SSL), and its successor Transport Layer Security (TLS), are cryptographic protocols designed to provide security over a computer network. Although SSL has been largely replaced by TLS, the term SSL is still widely used to refer to both protocols.
![Image by seobility.net](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gloxl0ovim19crq8xgg7.png)
**What is SSL?**
SSL stands for Secure Sockets Layer, a protocol developed to ensure secure, encrypted communications between a client (like a web browser) and a server (such as a web server). SSL was designed to prevent eavesdropping, tampering, and message forgery over the internet.
### How SSL/TLS Works
The primary function of SSL/TLS is to encrypt data being transmitted between two systems, ensuring that any data transferred remains private and integral. Here’s a simplified breakdown of the process:
![Image From CloudFlare](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i1sxsbudod567qlnz1z1.png)
1. **Handshake Process**:
- **Client Hello**: The client sends a request to the server, including supported encryption methods and a randomly generated data string.
- **Server Hello**: The server responds with its own random data string, chosen encryption method, and its SSL certificate.
- **Certificate Verification**: The client verifies the server's SSL certificate with a trusted Certificate Authority (CA).
- **Session Keys**: The client generates a session key, encrypts it with the server's public key, and sends it to the server. The server decrypts the session key with its private key.
   - **Encrypted Session**: Both parties use the session key for symmetric encryption of data, ensuring secure communication for the duration of the session.
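You can peek at the certificate a server presents during this handshake with a few lines of Node.js, using the built-in `tls` module (purely illustrative):
```javascript
// Connect to a server and print details of the certificate it presents
const tls = require('tls');

const socket = tls.connect(443, 'example.com', { servername: 'example.com' }, () => {
  const cert = socket.getPeerCertificate();
  console.log('Issued to:', cert.subject && cert.subject.CN);
  console.log('Issued by:', cert.issuer && cert.issuer.CN);
  console.log('Valid until:', cert.valid_to);
  socket.end();
});

socket.on('error', (err) => console.error('TLS error:', err.message));
```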
### Types of SSL/TLS Certificates
SSL/TLS certificates authenticate the identity of the website and encrypt the data. There are several types of certificates based on the level of validation:
1. **Domain Validated (DV) Certificates**: Basic level, verifying the domain owner.
2. **Organization Validated (OV) Certificates**: Involves checking the organization behind the domain.
3. **Extended Validation (EV) Certificates**: Provide the highest level of security and assurance by validating the organization's identity extensively.
### Benefits of SSL/TLS
- **Encryption**: SSL/TLS ensures that data transmitted between a client and a server is encrypted, protecting it from interception and tampering.
- **Authentication**: Validates that the website you are communicating with is indeed the intended site.
- **Data Integrity**: Ensures that data cannot be altered during transfer without being detected.
- **SEO Benefits**: Search engines like Google give preference to HTTPS websites, potentially boosting rankings.
### SSL/TLS in Use: HTTPS
The visible indicator of SSL/TLS in action is HTTPS (Hyper Text Transfer Protocol Secure). When you see "https://" in your browser's address bar along with a padlock icon, it means the site is secured by SSL/TLS, ensuring encrypted and authenticated communication between your browser and the server.
### Conclusion
SSL/TLS are critical components of internet security, safeguarding data from interception and tampering. By encrypting communications and verifying identities, SSL/TLS helps maintain privacy and trust online. Whether you run a blog, an e-commerce site, or any other online service, implementing SSL/TLS is essential for protecting your users and ensuring secure communication.
![SSL Diagrame from Wikimedia](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3n0e86on4wb5zykvk2ct.png)
(Illustration of how SSL/TLS secures a connection)
For further reading and to get SSL/TLS certificates, you can visit sources like [Cloudflare](https://www.cloudflare.com/learning/ssl/what-is-ssl/), [DigiCert](https://www.digicert.com/what-is-ssl), and [SSL.com](https://www.ssl.com/article/what-is-ssl-tls-an-in-depth-guide/).
---
You May Like This Video :-
{% youtube https://youtu.be/OjAwwGV38Ms?si=V_sDk5-TXzvvjweg %} | sh20raj |
1,913,024 | Vaadin 24 download file via button click | In Vaadin 24, to make a button clickable to download a file or other resources, just wrap the Button... | 0 | 2024-07-05T16:44:40 | https://dev.to/alkarid/vaadin-24-download-file-via-button-click-11oo | vaadin, springboot, java | In Vaadin 24, to make a button download a file or another resource when clicked, simply wrap the Button component in an Anchor component.
There is no need for an add-on or external plugin!
```java
// The Anchor points at the downloadable resource; "/files/report.pdf" is an example path
Anchor anchor = new Anchor("/files/report.pdf", "");
anchor.getElement().setAttribute("download", true); // download instead of navigating
anchor.add(new Button("Download"));
```
And that's all, pretty easy. | alkarid |
1,913,023 | Combining Bootstrap with SASS for Efficient Web Development | In this article, I will analyze my decision to combine Bootstrap, the most popular CSS framework for... | 0 | 2024-07-05T16:43:19 | https://dev.to/georgiosdrivas/combining-bootstrap-with-sass-for-efficient-web-development-4n6h | sass, webdev, css, design | In this article, I will analyze my decision to combine Bootstrap, the most popular CSS framework for developing responsive and mobile-first websites, with a CSS preprocessor, specifically SASS, for my personal project.
Bootstrap simplifies the process of writing CSS by allowing you to add classes to elements, while SASS enhances CSS with additional features, making your code more efficient and maintainable. Combining these tools can streamline your workflow and improve the overall quality of your web projects.
## Installing Bootstrap
There are two ways to install Bootstrap in your project.
**Include from CDN**
Using a Content Delivery Network (CDN) is quick and easy. To include Bootstrap via CDN, add the following line of code to your HTML file (this example uses the Bootstrap 4 CSS):
```
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm" crossorigin="anonymous">
```
And for the JavaScript, place this line of code at the bottom of the `body` element. Note that this example uses the Bootstrap 5 bundle, so make sure your CSS and JS versions match (for what's included in this script, please read more [here](https://getbootstrap.com/docs/5.0/getting-started/introduction/)):
```
<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js" integrity="sha384-MrcW6ZMFYlzcLA8Nl+NtUVF0sA7MsXsP1UyJoMp4YLEuNSfAP+JcXn/tWtIaxVXM" crossorigin="anonymous"></script>
```
**Download files locally**
Another way of importing Bootstrap to HTML is to directly download the files locally to your HTML project folder. The files can be downloaded from the following links:
Bootstrap 4: [https://getbootstrap.com/docs/4.3/getting-started/download/](https://getbootstrap.com/docs/4.3/getting-started/download/)
Bootstrap 5: [https://v5.getbootstrap.com/docs/5.0/getting-started/download/](https://v5.getbootstrap.com/docs/5.0/getting-started/download/)
## SASS
SASS (Syntactically Awesome Style Sheets) is a CSS preprocessor that allows you to write more flexible and maintainable CSS. It adds features such as variables, nested rules, and mixins, which streamline your CSS and make it more powerful.
## Installing SASS
One easy and simple way to install SASS in your project is by using a package manager. For this project, I used npm. To install SASS, run the following command:
```
npm install sass
```
## Why Combine SASS and Bootstrap
Combining SASS and Bootstrap allows you to leverage the best of both worlds. Bootstrap helps you build your layout quickly with pre-designed components, while SASS enables you to write reusable and modular CSS for custom parts of your design. Although it might add some unused CSS to your project, the benefits of reusability and maintainability outweigh this downside.
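As a sketch of what this looks like in practice (assuming Bootstrap's source SCSS is available locally, for example via `npm install bootstrap`), you can override Bootstrap's default variables before importing it and still write your own reusable styles:

```
// Override Bootstrap defaults before the import
$primary: #6f42c1;
$border-radius: 0.75rem;

// Pull in Bootstrap's source SCSS (the path depends on your setup)
@import "../node_modules/bootstrap/scss/bootstrap";

// Custom, reusable styles can build on Bootstrap's variables
.btn-hero {
  background-color: $primary;
  border-radius: $border-radius;
}
```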
## Benefits of Combining SASS with Bootstrap
- **Quick Layout Building**: Bootstrap’s predefined classes help you create responsive layouts rapidly.
- **Custom Styling**: SASS allows you to write custom styles that can be reused across your project, making your CSS more efficient.
- **Maintainability**: Using SASS features like variables and mixins can make your CSS easier to manage and update.
## Conclusion
In conclusion, combining Bootstrap with SASS can significantly enhance your web development workflow. Bootstrap’s ready-to-use components, paired with SASS’s powerful features, allow you to create efficient, maintainable, and scalable web designs. While it may introduce some unused CSS, the benefits of customizability and reusability make it a worthwhile approach. I encourage you to try this combination in your projects and experience the improved productivity and organization it brings. Feel free to check out my project and give it a star on [GitHub](https://github.com/GeorgiosDrivas/spring-boot-frontend). | georgiosdrivas |
1,913,020 | July 10: Developing Data-Centric AI Workshop | Learn how to author data-centric AI applications on July 10 at the virtual “Getting Started with... | 0 | 2024-07-05T16:39:27 | https://dev.to/voxel51/developing-data-centric-ai-workshop-50gh | computervision, ai, machinelearning, datascience | Learn how to author data-centric AI applications on July 10 at the virtual “Getting Started with FiftyOne Plugins” workshop - [register for the Zoom.](https://voxel51.com/computer-vision-events/developing-fiftyone-plugins-workshop-july-10/)
{% embed https://youtu.be/C7xLEr5eYtM %}
Join Daniel Gural for a free 90 min workshop, where he'll cover how to install, use, and build your own data-centric AI applications using [FiftyOne's plugin framework](https://voxel51.com/plugins/). For example:
- Curate your own AI art gallery with DALLE3, SDXL, and Latent Consistency Models
- Bring GPT4 Vision directly to your data
- Run OCR, semantic document searches, and make your textual data visual
- Create an open-vocabulary labeling interface
And so much more...
| jguerrero-voxel51 |
1,913,019 | Linux User Creation Bash Script | [As part of the HNG Internship program, we were tasked with creating a bash script named... | 0 | 2024-07-05T16:38:47 | https://dev.to/devdera/linux-user-creation-bash-script-1hfm | [As part of the HNG Internship program, we were tasked with creating a bash script named create_users.sh to automate the creation of new users and groups on a Linux system.
Check out https://hng.tech/internship and https://hng.tech/premium for more information.
**Overview**
This script, create_users.sh, automates the creation of users and their associated groups, sets up their home directories, generates random passwords, and logs all actions. The script reads from a specified text file containing usernames and group names.
**Prerequisites**
The script must be run with root privileges.
Ensure the input file with usernames and groups is formatted correctly and exists.
**Script steps**
- I created a file called Create_Users.sh.
- Using the vim editor, I wrote the script, which creates a log file and a secure password file.
- The script ensures it is run as root and sets up the required permissions.
Below is the content of the script.
```bash
#!/bin/bash
# Create log file and secure password file with proper permissions
LOG_FILE="/var/secure/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.txt"
# Ensure the script is run as root
if [[ "$(id -u)" -ne 0 ]]; then
echo "This script must be run as root."
exit 1
fi
# Ensure the log file exists
touch "$LOG_FILE"
# Setup password file
if [[ ! -d "/var/secure" ]]; then
mkdir /var/secure
fi
if [[ ! -f "$PASSWORD_FILE" ]]; then
touch "$PASSWORD_FILE"
chmod 600 "$PASSWORD_FILE"
fi
# Check if the input file is provided
if [[ -z "$1" ]]; then
echo "Usage: bash create_users.sh <name-of-text-file>"
echo "$(date '+%Y-%m-%d %H:%M:%S') - ERROR: No input file provided." >> "$LOG_FILE"
exit 1
fi
# Read the input file line by line
while IFS=';' read -r username groups; do
# Skip empty lines
[[ -z "$username" ]] && continue
# Remove whitespace
username=$(echo "$username" | xargs)
groups=$(echo "$groups" | xargs)
# Create user if not exists
if ! id "$username" &>/dev/null; then
# Create the user with a home directory
useradd -m -s /bin/bash "$username"
if [[ $? -ne 0 ]]; then
echo "$(date '+%Y-%m-%d %H:%M:%S') - ERROR: Failed to create user $username." >> "$LOG_FILE"
continue
fi
echo "$(date '+%Y-%m-%d %H:%M:%S') - INFO: User $username created." >> "$LOG_FILE"
# Generate a random password for the user
password=$(openssl rand -base64 12)
echo "$username:$password" | chpasswd
# Save the password to the secure password file
echo "$username,$password" >> "$PASSWORD_FILE"
echo "$(date '+%Y-%m-%d %H:%M:%S') - INFO: Password for user $username generated and stored." >> "$LOG_FILE"
else
echo "$(date '+%Y-%m-%d %H:%M:%S') - INFO: User $username already exists." >> "$LOG_FILE"
fi
# Create groups and add user to them
IFS=',' read -ra group_list <<< "$groups"
for group in "${group_list[@]}"; do
group=$(echo "$group" | xargs)
# Create group if not exists
if ! getent group "$group" >/dev/null; then
groupadd "$group"
echo "$(date '+%Y-%m-%d %H:%M:%S') - INFO: Group $group created." >> "$LOG_FILE"
fi
# Add user to the group
usermod -a -G "$group" "$username"
echo "$(date '+%Y-%m-%d %H:%M:%S') - INFO: User $username added to group $group." >> "$LOG_FILE"
done
# Set ownership and permissions for the home directory
chown -R "$username:$username" "/home/$username"
chmod 700 "/home/$username"
echo "$(date '+%Y-%m-%d %H:%M:%S') - INFO: Home directory for user $username set up with appropriate permissions." >> "$LOG_FILE"
done < "$1"
echo "$(date '+%Y-%m-%d %H:%M:%S') - INFO: User creation script completed." >> "$LOG_FILE"
exit 0
```
Next, I created an employees.txt file for the usernames and groups.
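For example, each line follows the `username;group1,group2` format that the script parses (these names are placeholders):

```
john;sudo,dev
jane;www-data
```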
Granted execute permission with `chmod +x /home/kali/Desktop/HNG/Create_Users.sh` (this is the file path) and ran the script with `sudo /home/kali/Desktop/HNG/Create_Users.sh /home/kali/Desktop/HNG/employees.txt`.
**Verify Execution**
Run the following commands to verify execution:
- `id John` to verify the user was created.
- `groups John` to verify the groups John is in.
- `cat /var/log/user_management.log` to print log details.
- `cat /var/secure/user_passwords.txt` to print passwords.
**Learn More About HNG Internship**
The HNG Internship is a remote internship program designed to find and develop the most talented software developers. It offers a stimulating environment for interns to improve their skills and showcase their abilities through real-world tasks.
(https://hng.tech/internship)
[](https://hng.tech/hire)
| devdera |
|
1,913,021 | Refactoring 014 - Remove IF | The first instruction you learned should be the least you use TL;DR: Remove all your Accidental... | 15,550 | 2024-07-05T16:38:37 | https://maximilianocontieri.com/refactoring-014-remove-if | webdev, beginners, programming, java | *The first instruction you learned should be the least you use*
> TL;DR: Remove all your Accidental IF-sentences
# Problems Addressed
- Code Duplication
- Possible Typos and defects
# Related Code Smells
{% post https://dev.to/mcsee/code-smell-07-boolean-variables-3koj %}
{% post https://dev.to/mcsee/code-smell-36-switch-case-elseif-else-if-statements-h6c %}
{% post https://dev.to/mcsee/code-smell-133-hardcoded-if-conditions-53bi %}
{% post https://dev.to/mcsee/code-smell-156-implicit-else-1ko3 %}
{% post https://dev.to/mcsee/code-smell-119-stairs-code-1e2b %}
{% post https://dev.to/mcsee/code-smell-145-short-circuit-hack-4l4p %}
{% post https://dev.to/mcsee/code-smell-101-comparison-against-booleans-3aic %}
{% post https://dev.to/mcsee/code-smell-45-not-polymorphic-388g %}
# Steps
1. Find or Create a Polymorphic Hierarchy
2. Move the Body of Each IF to the Corresponding Abstraction
3. Name the Abstractions
4. Name the Method
5. Replace if Statements with Polymorphic Message Sends
# Sample Code
## Before
[Gist Url]: # (https://gist.github.com/mcsee/ffba17263f40053ed57698d0880b942c)
```java
public String handleMicrophoneState(String state) {
if (state.equals("off")) {
return "Microphone is off";
} else {
return "Microphone is on";
}
}
/* The constant representing the 'off' state is
duplicated throughout the code,
increasing the risk of typos and spelling mistakes.
The "else" condition doesn't explicitly check for the 'on' state;
it implicitly handles any state that is 'not off'.
This approach leads to repetition of the IF condition
wherever the state needs handling,
exposing internal representation and violating encapsulation.
The algorithm is not open for extension and closed for modification,
meaning that adding a new state
will require changes in multiple places in the code. */
```
## After
[Gist Url]: # (https://gist.github.com/mcsee/2c97cd57dc9e98c877e91fcb7ed3191c)
```java
// Step 1: Find or Create a Polymorphic Hierarchy
abstract class MicrophoneState { }
final class On extends MicrophoneState { }
final class Off extends MicrophoneState { }
// Step 2: Move the Body of Each IF to the Corresponding Abstraction
abstract class MicrophoneState {
public abstract String polymorphicMethodFromIf();
}
final class On extends MicrophoneState {
@Override
public String polymorphicMethodFromIf() {
return "Microphone is on";
}
}
final class Off extends MicrophoneState {
@Override
public String polymorphicMethodFromIf() {
return "Microphone is off";
}
}
// Step 3: Name the Abstractions
abstract class MicrophoneState {}
final class MicrophoneStateOn extends MicrophoneState {}
final class MicrophoneStateOff extends MicrophoneState {}
// Step 4: Name the Method
abstract class MicrophoneState {
public abstract String handle();
}
final class MicrophoneStateOn extends MicrophoneState {
@Override
String handle() {
return "Microphone is on";
}
}
final class MicrophoneStateOff extends MicrophoneState {
@Override
String handle() {
return "Microphone is off";
}
}
// Step 5: Replace if Statements with Polymorphic Message Sends
public String handleMicrophoneState(String state) {
Map<String, MicrophoneState> states = new HashMap<>();
states.put("muted", new Muted());
states.put("recording", new Recording());
states.put("idle", new Idle());
MicrophoneState microphoneState =
states.getOrDefault(state, new NullMicrophoneState());
return microphoneState.handle();
}
```
# Type
[X] Semi-Automatic
# Safety
Most steps are mechanical. This is a pretty safe refactoring.
# Why is the code better?
The refactored code follows the open/closed principle and favors polymorphism instead of using IFs
# Limitations
You should only apply it to [**Accidental IFs**](https://dev.to/mcsee/how-to-get-rid-of-annoying-ifs-forever-1jfg).
Leave the business rules as **"domain ifs"** and don't apply this refactoring
# Tags
- IFs
# Related Refactorings
{% post https://dev.to/mcsee/refactoring-013-remove-repeated-code-4npi %}
# See also
{% post https://dev.to/mcsee/how-to-get-rid-of-annoying-ifs-forever-1jfg %}
# Credits
Image by <a href="https://pixabay.com/users/renuagra-5667962/">Renuagra</a> on <a href="https://pixabay.com/">Pixabay</a>
* * *
This article is part of the Refactoring Series.
{% post https://dev.to/mcsee/how-to-improve-your-code-with-easy-refactorings-2ij6 %} | mcsee |
1,913,017 | Formula Plumbing Services | Formula Plumbing Services is family-owned and operated. We specialize in service plumbing. Our owner... | 0 | 2024-07-05T16:33:40 | https://dev.to/formula_plumbing_802c4be8/formula-plumbing-services-25hh | Formula Plumbing Services is family-owned and operated. We specialize in service plumbing. Our owner Joe is a Master Plumbing Contractor and has been in the industry for 24+ years. Since our founding, we’ve worked with numerous clients throughout the Tampa Bay Area. We’ve had the privilege to build a reputable client base that consists of repeat and referral business. All work guaranteed 100% satisfaction! Great service begins and ends with experienced, reliable & friendly professionals along with great customers. So if you have a problem, our [**emergency plumber Harbor FL**](https://formulaplumbing.com/) can fix it.
| formula_plumbing_802c4be8 |
|
1,913,016 | JavaScript Today Blog | This blog delves into solving common JavaScript interview questions, making it a valuable tool for... | 0 | 2024-07-05T16:32:54 | https://dev.to/sandeep_kundekar_2b86f18f/javascript-today-blog-2758 | This blog delves into solving common JavaScript interview questions, making it a valuable tool for honing your problem-solving skills and interview preparation. They also cover essential topics like removing duplicates from arrays and strings, which are frequently encountered tasks in programming. | sandeep_kundekar_2b86f18f |
|
1,913,014 | Implementing Secure Authentication in Next.js with JWT and MongoDB. Protect Routes using middleware | In modern web development, implementing a robust authentication system is crucial for securing user... | 0 | 2024-07-05T16:28:37 | https://dev.to/abdur_rakibrony_97cea0e9/implementing-secure-authentication-in-nextjs-with-jwt-and-mongodb-protect-routes-using-middleware-4389 | nextjs, middleware, authjs, security | In modern web development, implementing a robust authentication system is crucial for securing user data and providing a seamless user experience. In this blog post, we'll explore how to implement authentication in a Next.js application using JSON Web Tokens (JWT) and MongoDB. We'll cover the key aspects of this implementation, including middleware, token management, and user registration and login.
**Setting Up Middleware for Route Protection**
The middleware function plays a crucial role in protecting routes and ensuring that users are authenticated before accessing certain pages. Here's the implementation of the middleware function:
```
import { getToken, verifyToken } from "@/lib/auth";
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";
export async function middleware(request: NextRequest) {
const token = getToken(request);
const { pathname } = request.nextUrl;
if (token) {
const payload = await verifyToken(token);
if (payload) {
if (pathname === "/" || pathname === "/register") {
return NextResponse.redirect(new URL("/home", request.url));
}
return NextResponse.next();
}
}
if (pathname !== "/" && pathname !== "/register") {
return NextResponse.redirect(new URL("/", request.url));
}
return NextResponse.next();
}
export const config = {
matcher: ["/((?!api|_next/static|_next/image|favicon.ico).*)"],
};
```
This middleware function checks if a token exists and verifies it. If the token is valid, it allows the user to proceed to the requested route. If the token is invalid or missing, it redirects the user to the login page.
**Managing Tokens with JSON Web Tokens (JWT)**
To handle token generation and verification, we use the jose library. Below are the utility functions for managing JWTs:
```
import { jwtVerify, SignJWT } from 'jose';
import { NextRequest } from "next/server";
export function getToken(req: NextRequest) {
return req.cookies.get("token")?.value;
}
export async function verifyToken(token: string) {
if (!token) return null;
try {
const secret = new TextEncoder().encode(process.env.JWT_SECRET);
const { payload } = await jwtVerify(token, secret);
return payload as { userId: string };
} catch (error) {
console.error('Token verification failed:', error);
return null;
}
}
export async function createToken(payload: { userId: string }) {
const secret = new TextEncoder().encode(process.env.JWT_SECRET!);
const token = await new SignJWT(payload)
.setProtectedHeader({ alg: 'HS256' })
.setExpirationTime('1d')
.sign(secret);
return token;
}
```
These functions handle retrieving the token from cookies, verifying the token, and creating a new token. The verifyToken function ensures that the token is valid and extracts the payload, while createToken generates a new token with a specified payload.
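As a quick sketch of how these utilities fit together (the userId value here is just a placeholder):

```
// Issue a token at login time...
const token = await createToken({ userId: "66a1b2c3d4" });

// ...and check it later, e.g. inside the middleware
const payload = await verifyToken(token);
// payload is { userId: "66a1b2c3d4" } on success, or null if invalid/expired
```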
**User Registration and Login**
For user registration and login, we need to handle form data, hash passwords, and create tokens. Here's how we achieve this:
```
"use server";
import { cookies } from "next/headers";
import bcrypt from "bcryptjs";
import { connectToDB } from "@/lib/db";
import User from "@/models/User";
import { loginSchema, registerSchema } from "@/zod/schema";
import { createToken } from "@/lib/auth";
export async function registerUser(formData: FormData) {
await connectToDB();
const parsedData = registerSchema.parse(
Object.fromEntries(formData.entries())
);
const existingUser = await User.findOne({ email: parsedData.email });
if (existingUser) {
throw new Error("Email already exists");
}
const hashedPassword = await bcrypt.hash(parsedData.password, 10);
const newUser = await User.create({
...parsedData,
password: hashedPassword,
createdAt: new Date(),
});
const token = await createToken({ userId: newUser._id.toString() });
const cookieStore = cookies();
cookieStore.set("token", token, {
httpOnly: true,
secure: process.env.NODE_ENV === "production",
});
}
export async function loginUser(formData: FormData) {
const { email, password } = loginSchema.parse(
Object.fromEntries(formData.entries())
);
await connectToDB();
const user = await User.findOne({ email });
if (!user) {
throw new Error("Invalid credentials");
}
const isPasswordValid = await bcrypt.compare(password, user.password);
if (!isPasswordValid) {
throw new Error("Invalid credentials");
}
const token = await createToken({ userId: user._id.toString() });
const cookieStore = cookies();
cookieStore.set("token", token, {
httpOnly: true,
secure: process.env.NODE_ENV === "production",
});
}
export async function logoutUser() {
cookies().set("token", "", { expires: new Date(0) });
}
```
- **registerUser**: handles user registration. It connects to the database, parses the form data, checks if the email already exists, hashes the password, creates a new user, generates a token, and sets the token as an HTTP-only cookie.
- **loginUser**: handles user login. It parses the form data, connects to the database, verifies the user's credentials, generates a token, and sets the token as an HTTP-only cookie.
- **logoutUser**: handles user logout by clearing the token cookie.
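As a sketch of how the registration action can be wired up from a page in the App Router (the import path is an assumption; the field names must match what your `registerSchema` expects):

```
import { registerUser } from "@/actions/auth"; // adjust to where your actions live

export default function RegisterPage() {
  // Server actions receive the submitted FormData directly
  return (
    <form action={registerUser}>
      <input name="email" type="email" placeholder="Email" required />
      <input name="password" type="password" placeholder="Password" required />
      <button type="submit">Sign up</button>
    </form>
  );
}
```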
**Conclusion**
Implementing authentication in a Next.js application using JWT and MongoDB provides a secure and efficient way to manage user sessions. By leveraging middleware, JWT, and MongoDB, we can protect routes, verify tokens, and handle user registration and login seamlessly. | abdur_rakibrony_97cea0e9 |
1,913,012 | Building a Secure OTP-based Login System in Next.js | In today's digital age, ensuring the security of user authentication is paramount. One effective... | 0 | 2024-07-05T16:22:11 | https://dev.to/abdur_rakibrony_97cea0e9/building-a-secure-otp-based-login-system-in-nextjs-2od8 | nextjs, react, authjs, otp | In today's digital age, ensuring the security of user authentication is paramount. One effective method is using One-Time Passwords (OTPs) for login. In this post, we'll walk through how to implement an OTP-based login system using Next.js, with both email and phone number options.
**Why Use OTP?**
OTPs add an extra layer of security by requiring a temporary code sent to the user's email or phone number. This method reduces the risk of unauthorized access, as the code is valid for a short period.
**Setting Up the Frontend**
We start by creating a login component that captures the user's email or phone number and handles OTP sending and verification.
```
//login component
"use client";
import { useState, useEffect } from "react";
import { Input } from "@/components/ui/input";
import { Lock, Mail, Phone } from "lucide-react";
import { Button } from "@/components/ui/button";
import {
InputOTP,
InputOTPGroup,
InputOTPSlot,
} from "@/components/ui/input-otp";
import { SendOTP } from "@/utils/SendOTP";
import { useRouter } from "next/navigation";
import { signIn } from "next-auth/react";
const Login = () => {
const [contact, setContact] = useState("");
const [otp, setOtp] = useState(false);
const [otpCode, setOtpCode] = useState("");
const [receivedOtpCode, setReceivedOtpCode] = useState("");
const [timeLeft, setTimeLeft] = useState(60);
const [timerRunning, setTimerRunning] = useState(false);
const [resendClicked, setResendClicked] = useState(false);
const [hasPassword, setHasPassword] = useState(false);
const [password, setPassword] = useState("");
const [isIncorrectOTP, setIsIncorrectOTP] = useState(false);
const router = useRouter();
const handleSendOtp = async () => {
setOtp(true);
startTimer();
setResendClicked(true);
const data = await SendOTP(contact);
if (data?.hasPassword) {
setHasPassword(data?.hasPassword);
}
if (data?.otp) {
setReceivedOtpCode(data?.otp);
}
};
const handleLogin = async () => {
if (otpCode === receivedOtpCode) {
await signIn("credentials", {
redirect: false,
email: isNaN(contact) ? contact : contact + "@gmail.com",
});
router.push("/");
} else {
setIsIncorrectOTP(true);
}
};
const startTimer = () => {
setTimeLeft(60);
setTimerRunning(true);
};
const resendOTP = () => {
setTimerRunning(false);
startTimer();
setResendClicked(true);
handleSendOtp();
};
useEffect(() => {
let timer;
if (timerRunning) {
timer = setTimeout(() => {
if (timeLeft > 0) {
setTimeLeft((prevTime) => prevTime - 1);
} else {
setTimerRunning(false);
}
}, 1000);
}
return () => clearTimeout(timer);
}, [timeLeft, timerRunning]);
useEffect(() => {
if (contact === "" || contact === null) {
setOtp(false);
setOtpCode("");
setTimeLeft(60);
setTimerRunning(false);
setResendClicked(false);
}
}, [contact]);
return (
<div>
<div className="relative w-full max-w-sm">
{contact === "" || isNaN(contact) ? (
<Mail
className="absolute left-3 top-1/2 transform -translate-y-1/2 text-gray-400"
size={20}
/>
) : (
<Phone
className="absolute left-3 top-1/2 transform -translate-y-1/2 text-gray-400"
size={20}
/>
)}
<Input
type="text"
name="contact"
value={contact}
placeholder="Email or phone"
onChange={(e) => setContact(e.target.value)}
disabled={contact && otp}
className="pl-10"
/>
</div>
{hasPassword ? (
<div className="relative w-full max-w-sm mt-4">
<Lock
className="absolute left-3 top-1/2 transform -translate-y-1/2 text-gray-400"
size={20}
/>
<Input
type="password"
name="password"
value={password}
placeholder="Password"
onChange={(e) => setPassword(e.target.value)}
className="pl-10"
/>
</div>
) : (
<div>
{contact && otp && (
<div className="text-center text-green-500 text-base mt-1">
OTP sent successfully. Please enter OTP below.
</div>
)}
{contact && otp && (
<div className="space-y-2 w-full flex flex-col items-center justify-center my-2">
<InputOTP
maxLength={4}
value={otpCode}
onChange={(value) => setOtpCode(value)}
isError={isIncorrectOTP}
>
<InputOTPGroup>
<InputOTPSlot index={0} />
<InputOTPSlot index={1} />
<InputOTPSlot index={2} />
<InputOTPSlot index={3} />
</InputOTPGroup>
</InputOTP>
<div>
{resendClicked && timeLeft > 0 ? (
<p className="text-sm">
Resend OTP available in{" "}
<span className="text-blue-500">
{timeLeft > 0 ? `${timeLeft}` : ""}
</span>
</p>
) : (
<Button
variant="link"
onClick={resendOTP}
className="text-blue-500"
>
Resend OTP
</Button>
)}
</div>
</div>
)}
</div>
)}
{receivedOtpCode ? (
<Button
onClick={handleLogin}
className="w-full mt-4 bg-green-500 hover:bg-green-400"
>
Login
</Button>
) : (
<Button
onClick={handleSendOtp}
className="w-full mt-4 bg-green-500 hover:bg-green-400"
>
Next
</Button>
)}
{isIncorrectOTP && (
<p className="text-red-500 text-sm text-center mt-2">
Incorrect OTP. Please try again.
</p>
)}
</div>
);
};
export default Login;
```
This component manages the user interaction for entering their contact information, sending the OTP, and handling the login process. It includes state management for various aspects such as OTP verification, countdown timer, and error handling.
**Backend API for OTP Generation and Sending**
Next, we'll set up the backend to handle OTP generation and sending. The OTP can be sent via email or SMS based on the user's contact information.
```
//OTP Generation and Sending
import { sendVerificationSMS } from "@/lib/sendSMS";
import User from "@/models/user";
import { NextResponse } from "next/server";
import { connectToDB } from "@/lib/db";
import nodemailer from "nodemailer";
const generateOTP = () => {
const digits = "0123456789";
let OTP = "";
for (let i = 0; i < 4; i++) {
OTP += digits[Math.floor(Math.random() * 10)];
}
return OTP;
};
const sendVerificationEmail = async (contact, otp) => {
try {
let transporter = nodemailer.createTransport({
service: "gmail",
auth: {
user: "[email protected]",
pass: "your-email-password",
},
});
let info = await transporter.sendMail({
from: `"Your Company" <[email protected]>`,
to: contact,
subject: "Verification Code",
text: `Your verification code is: ${otp}`,
});
return info.messageId;
} catch (error) {
console.error("Error sending email:", error);
throw new Error("Error sending verification email");
}
};
export async function POST(req) {
try {
await connectToDB();
const otp = generateOTP();
const { contact } = await req.json();
const existingUser = await User.findOne({
email: isNaN(contact) ? contact : contact + "@gmail.com",
});
if (isNaN(contact)) {
await sendVerificationEmail(contact, otp);
return NextResponse.json({
message: "Verification code has been sent to your email",
otp,
});
} else {
await sendVerificationSMS(contact, otp);
return NextResponse.json({
message: "Verification code has been sent",
otp,
});
}
} catch (error) {
console.error(error);
    return NextResponse.json(
      { message: "An error occurred while processing the request." },
      { status: 500 }
    );
}
}
```
This backend code handles OTP generation and sends it either via email or SMS depending on the user's input. The generateOTP function creates a random 4-digit OTP, and the sendVerificationEmail and sendVerificationSMS functions send the OTP to the user.
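The `sendVerificationSMS` helper imported from `@/lib/sendSMS` isn't shown above. A hypothetical implementation using Twilio (account credentials and sender number assumed to live in environment variables) could look like this:

```
// lib/sendSMS.js (hypothetical Twilio-based implementation)
import twilio from "twilio";

const client = twilio(
  process.env.TWILIO_ACCOUNT_SID,
  process.env.TWILIO_AUTH_TOKEN
);

export async function sendVerificationSMS(phone, otp) {
  // Deliver the OTP as a plain text message
  return client.messages.create({
    body: `Your verification code is: ${otp}`,
    from: process.env.TWILIO_PHONE_NUMBER,
    to: phone,
  });
}
```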
**Conclusion**
Implementing an OTP-based login system enhances the security of your application by adding an additional verification step. This system ensures that only users with access to the provided email or phone number can log in, protecting against unauthorized access. For production use, verify the OTP on the server rather than returning it in the API response, so the code never reaches the client.
Feel free to modify and expand upon this basic implementation to suit your specific requirements. Happy coding!
| abdur_rakibrony_97cea0e9 |
1,913,006 | Serverless Application using AWS Lambda ,Api Gateway,AWS Amplify | Creating a serverless application using AWS Lambda, API Gateway, and AWS Amplify involves a series of... | 0 | 2024-07-05T16:20:11 | https://dev.to/harshana_vivekanandhan_88/serverless-application-using-aws-lambda-api-gatewayaws-amplify-36hc | aws, html, cloud | Creating a serverless application using AWS Lambda, API Gateway, and AWS Amplify involves a series of steps to set up and integrate these services. Here's a high-level overview of the process:
### 1. **Set Up AWS Lambda Function**
AWS Lambda is a compute service that lets you run code without provisioning or managing servers.
- **Create a Lambda Function**:
- Go to the AWS Management Console.
- Navigate to AWS Lambda.
- Click on "Create function."
- Choose a blueprint or start from scratch.
- Configure the function (name, runtime, permissions, etc.).
- Write your function code or upload a zip file.
- **Configure the Lambda Function**:
- Set up environment variables if needed.
- Configure the function's execution role to allow necessary permissions.
### 2. **Create an API with Amazon API Gateway**
Amazon API Gateway allows you to create and publish RESTful APIs.
- **Create an API**:
- Go to the AWS Management Console.
- Navigate to API Gateway.
- Click on "Create API" and choose REST API.
- **Define Resources and Methods**:
- Create resources (e.g., `/items`).
- Add methods (e.g., GET, POST) to the resources.
- Integrate these methods with your Lambda functions by specifying the Lambda ARN.
- **Deploy the API**:
- Create a new stage (e.g., `dev`).
- Deploy the API to this stage.
- Note the invoke URL provided by API Gateway for later use.
### 3. **Set Up AWS Amplify**
AWS Amplify is a set of tools and services to help front-end web and mobile developers build scalable full-stack applications.
- **Initialize a New Amplify Project**:
- Install Amplify CLI: `npm install -g @aws-amplify/cli`.
- Configure the CLI: `amplify configure` (follow the prompts to set up your AWS profile).
- Initialize your Amplify project: `amplify init`.
- **Add API to Your Amplify Project**:
- Add an API: `amplify add api`.
- Choose REST when prompted and provide the necessary details (e.g., path, Lambda integration).
- Push the changes to the cloud: `amplify push`.
### 4. **Build and Deploy Your Frontend with Amplify**
- **Create Your Frontend Application**:
- You can use frameworks like React, Angular, or Vue.js.
- Amplify supports hosting for static websites.
- **Host Your Application with Amplify**:
- Go to the AWS Amplify Console.
- Connect your repository (e.g., GitHub, GitLab).
- Configure build settings and deploy.
### 5. **Testing and Iteration**
- Test your application end-to-end.
- Make necessary adjustments to the Lambda functions, API Gateway configurations, or frontend code.
- Utilize Amplify's CI/CD capabilities for automatic deployment on code changes.
### Example: Simple Serverless To-Do Application
Here’s a basic example of a serverless To-Do application using AWS Lambda, API Gateway, and AWS Amplify:
1. **Lambda Function**: A simple Lambda function to handle CRUD operations on to-do items (see the sketch below).
2. **API Gateway**: Configure a REST API with paths like `/todos` and methods like GET, POST, DELETE.
3. **Amplify Frontend**: A React application integrated with the API.
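As a sketch of item 1, a minimal Node.js handler for the GET `/todos` method (returning a static list purely for illustration) could look like this:

```
// Hypothetical Lambda handler for GET /todos (API Gateway proxy integration)
export const handler = async (event) => {
  const todos = [{ id: 1, title: "Learn AWS Lambda", done: false }];
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(todos),
  };
};
```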
| harshana_vivekanandhan_88 |
1,913,005 | Dart Flutter Theme | A post by Aadarsh Kunwar | 0 | 2024-07-05T16:19:01 | https://dev.to/aadarshk7/dart-flutter-theme-3llc | aadarshk7 |
||
1,913,004 | Automating user and group management with Bash | Introduction As a SysOps engineer, managing users and groups on a server is one of your... | 0 | 2024-07-05T16:18:32 | https://dev.to/tim_shantel/automating-user-and-group-management-with-bash-ooc |
### Introduction
As a SysOps engineer, managing users and groups on a server is one of your routine tasks. This process can be quite time-consuming and susceptible to errors, particularly when handling numerous users. Automating these tasks is crucial for achieving efficiency and reliability. This article will guide you through a Bash script that automates the creation of users and groups, sets up home directories with correct permissions, generates random passwords, and logs all actions.
### Bash
Bash, which stands for "Bourne Again Shell," is a command language interpreter and scripting language commonly used in Unix-like operating systems such as Linux and macOS. It's the default shell for most Linux distributions and is widely used for automation, managing system operations, and writing scripts.
#### What is a Bash script?
A Bash script is a file, typically ending with the extension .sh, that contains a logical series of related commands to be executed.
This project was inspired by stage one of the DevOps track of HNG Internship 11 (https://hng.tech/hire).
Visit https://hng.tech/internship to learn more about the program.
### Prerequisites
Before we start, ensure you have:
- Access to a Linux server with root privileges
- Basic knowledge of Bash scripting
### Script Overview
The script we'll create will:
1. Create users and groups
2. set up home directories with the correct permissions
3. Generate random passwords for users
4. Log all actions to a specified log file
### User Management Automation Project
Let's dive into a hands-on project that walks through building user and group management with a Bash script, step by step.
## Creating the script file
First, create a new Bash script file:
```
touch create_users.sh
```
Open the file in your preferred text editor:
```
vim create_users.sh
```
Create an employee file where the users and groups will be listed:
```
touch employeefile.txt
```
Edit your employeefile.txt file with a text editor and enter the following users and groups:
```
vim employeefile.txt
```
### Example Input File
```
light;sudo,dev,www-data
idimma;sudo
mayowa;dev,www-data
```
**Insert the following script below into your 'create_users.sh' file**
### The Script
```
#!/bin/bash
# Define log and password files
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.txt"
# Create log and password files if they don't exist
touch $LOG_FILE
mkdir -p /var/secure
touch $PASSWORD_FILE
# Function to log messages
log_message() {
echo "$(date +'%Y-%m-%d %H:%M:%S') - $1" | tee -a $LOG_FILE
}
# Function to generate random password
generate_password() {
tr -dc A-Za-z0-9 </dev/urandom | head -c 12 ; echo ''
}
# Check if the input file is provided
if [ $# -ne 1 ]; then
echo "Usage: $0 <input_file>"
exit 1
fi
# Read the input file
INPUT_FILE=$1
# Check if the input file exists
if [ ! -f $INPUT_FILE ]; then
echo "Input file not found!"
exit 1
fi
while IFS=';' read -r username groups; do
# Remove leading and trailing whitespaces
username=$(echo $username | xargs)
groups=$(echo $groups | xargs)
if id "$username" &>/dev/null; then
log_message "User $username already exists. Skipping..."
continue
fi
# Create a personal group for the user
groupadd $username
if [ $? -ne 0 ]; then
log_message "Failed to create group $username."
continue
fi
log_message "Group $username created successfully."
# Create user and add to personal group
useradd -m -g $username -s /bin/bash $username
if [ $? -ne 0 ]; then
log_message "Failed to create user $username."
continue
fi
log_message "User $username created successfully."
# Create additional groups if they don't exist and add user to groups
IFS=',' read -ra group_array <<< "$groups"
for group in "${group_array[@]}"; do
group=$(echo $group | xargs)
if [ -z "$group" ]; then
continue
fi
if ! getent group $group >/dev/null; then
groupadd $group
if [ $? -ne 0 ]; then
log_message "Failed to create group $group."
continue
fi
log_message "Group $group created successfully."
fi
usermod -aG $group $username
log_message "User $username added to group $group."
done
# Set up home directory permissions
chmod 700 /home/$username
chown $username:$username /home/$username
log_message "Permissions set for home directory of $username."
# Generate random password and store it
password=$(generate_password)
echo "$username:$password" | chpasswd
echo "$username:$password" >> $PASSWORD_FILE
log_message "Password set for user $username."
done < "$INPUT_FILE"
log_message "User and group creation process completed."
exit 0
```
**Make the script executable**
```
chmod +x create_users.sh
```
**Execute the script**
```
sudo bash ./create_users.sh employeefile.txt
```
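After a successful run, `/var/log/user_management.log` will contain entries in the format produced by `log_message`, for example (timestamps and names will depend on your input):

```
2024-07-05 16:20:01 - Group light created successfully.
2024-07-05 16:20:01 - User light created successfully.
2024-07-05 16:20:02 - User light added to group sudo.
2024-07-05 16:20:03 - User and group creation process completed.
```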
## Defining the Script
Start by defining the script header and setting some initial variables:
**Define Log and Password Files**
```
#!/bin/bash
# Define log and password files
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.txt"
```
These lines set the paths for the log file and the file where passwords will be stored.
**Create Log and Password Files if They Don't Exist**
```
# Create log and password files if they don't exist
touch $LOG_FILE
mkdir -p /var/secure
touch $PASSWORD_FILE
```
- touch $LOG_FILE creates the log file if it doesn't already exist.
- mkdir -p /var/secure creates the directory /var/secure if it doesn't exist.
- touch $PASSWORD_FILE creates the password file if it doesn't exist.
**Function to Log Messages**
```
# Function to log messages
log_message() {
echo "$(date +'%Y-%m-%d %H:%M:%S') - $1" | tee -a $LOG_FILE
}
```
This function logs messages with a timestamp. `tee -a $LOG_FILE` appends the log message to the log file while also displaying it on the console.
**Function to Generate Random Password**
```
# Function to generate random password
generate_password() {
tr -dc A-Za-z0-9 </dev/urandom | head -c 12 ; echo ''
}
```
This function generates a random 12-character password using characters from A-Z, a-z, and 0-9.
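You can try the generator on its own in a shell; the output is random, so yours will differ:

```
$ tr -dc A-Za-z0-9 </dev/urandom | head -c 12 ; echo ''
f8G2kQp1Zr7X
```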
**Check if the Input File is Provided**
```
# Check if the input file is provided
if [ $# -ne 1 ]; then
echo "Usage: $0 <input_file>"
exit 1
fi
```
This part ensures that the script is called with exactly one argument (the input file). If not, it displays a usage message and exits.
**Read the Input File**
```
# Read the input file
INPUT_FILE=$1
```
This assigns the first argument (input file) to the variable INPUT_FILE.
**Check if the Input File Exists**
```
# Check if the input file exists
if [ ! -f $INPUT_FILE ]; then
echo "Input file not found!"
exit 1
fi
```
This checks if the input file exists. If not, it prints an error message and exits.
**Process Each Line of the Input File**
```
while IFS=';' read -r username groups; do
# Remove leading and trailing whitespaces
username=$(echo $username | xargs)
groups=$(echo $groups | xargs)
```
- IFS=';' read -r username groups reads each line of the input file, splitting the line into username and groups using ; as a delimiter.
- xargs is used to trim leading and trailing whitespaces from username and groups.
**Check if the User Already Exists**
```
if id "$username" &>/dev/null; then
log_message "User $username already exists. Skipping..."
continue
fi
```
This checks if the user already exists. If the user exists, it logs a message and skips to the next iteration.
**Create a Personal Group for the User**
```
# Create a personal group for the user
groupadd $username
if [ $? -ne 0 ]; then
log_message "Failed to create group $username."
continue
fi
log_message "Group $username created successfully."
```
- groupadd $username creates a new group with the same name as the user.
- $? checks the exit status of the groupadd command. If it's not zero (indicating an error), it logs a failure message and skips to the next iteration.
- If the group is created successfully, it logs a success message.
**Create the User and Add to Personal Group**
```
# Create user and add to personal group
useradd -m -g $username -s /bin/bash $username
if [ $? -ne 0 ]; then
log_message "Failed to create user $username."
continue
fi
log_message "User $username created successfully."
```
- useradd -m -g $username -s /bin/bash $username creates the user with a home directory and assigns the personal group.
- The exit status is checked, and appropriate messages are logged.
**Create Additional Groups and Add User to Them**
```
# Create additional groups if they don't exist and add user to groups
IFS=',' read -ra group_array <<< "$groups"
for group in "${group_array[@]}"; do
group=$(echo $group | xargs)
if [ -z "$group" ]; then
continue
fi
if ! getent group $group >/dev/null; then
groupadd $group
if [ $? -ne 0 ]; then
log_message "Failed to create group $group."
continue
fi
log_message "Group $group created successfully."
fi
usermod -aG $group $username
log_message "User $username added to group $group."
done
```
- IFS=',' read -ra group_array <<< "$groups" splits the groups string into an array.
- The script iterates over the array, trims whitespaces, checks if the group is empty, and continues to the next iteration if it is.
- It checks if each group exists using getent group $group. If the group does not exist, it creates it.
- The user is added to the group using usermod -aG $group $username.
- Appropriate messages are logged throughout.
**Set Up Home Directory Permissions**
```
# Set up home directory permissions
chmod 700 /home/$username
chown $username:$username /home/$username
log_message "Permissions set for home directory of $username."
```
- chmod 700 /home/$username sets the permissions of the user's home directory.
- chown $username:$username /home/$username changes the ownership of the home directory to the user.
- A log message is recorded.
**Generate Random Password and Store It**
```
# Generate random password and store it
password=$(generate_password)
echo "$username:$password" | chpasswd
echo "$username:$password" >> $PASSWORD_FILE
log_message "Password set for user $username."
```
- password=$(generate_password) generates a random password.
- echo "$username:$password" | chpasswd sets the user's password.
- The password is stored in the password file.
- A log message is recorded.
**Complete the Process**
```
done < "$INPUT_FILE"
log_message "User and group creation process completed."
exit 0
```
- The done statement ends the while loop.
- A log message indicates that the process is complete.
- exit 0 exits the script successfully.
This detailed breakdown should help you understand each part of the script and its function as you run it in your terminal. | tim_shantel |
|
694,268 | Lit and Figma | In this article I will go over how to set up a Lit web component and use it to create a figma plugin.... | 0 | 2021-05-11T00:22:31 | https://rodydavis.com/posts/figma-and-lit/ | ---
title: Lit and Figma
published: true
date: 2021-05-10 00:00:00 UTC
tags:
canonical_url: https://rodydavis.com/posts/figma-and-lit/
---
In this article I will go over how to set up a [Lit](https://lit.dev/) web component and use it to create a Figma plugin.
**TLDR** You can find the final source [here](https://github.com/rodydavis/figma_lit_example).
## Prerequisites
- VS Code
- Figma Desktop
- Node
- TypeScript
## Getting Started
We can start off by creating an empty directory with whatever `snake_case` name we want.
```
mkdir figma_lit_example
cd figma_lit_example
```
### Web Setup
Now we are in the `figma_lit_example` directory and can set up Figma and Lit. Let's start with Node.
```
npm init -y
```
This will set up the basics for a Node project. Now let's add some config files. Open the `package.json` and replace it with the following:
```
{ "name": "figma_lit_example", "version": "1.0.0", "description": "Lit Figma Plugin", "dependencies": { "lit": "^2.0.0-rc.1" }, "devDependencies": { "@figma/plugin-typings": "^1.23.0", "html-webpack-inline-source-plugin": "^1.0.0-beta.2", "html-webpack-plugin": "^4.3.0", "css-loader": "^5.2.4", "ts-loader": "^8.0.0", "typescript": "^4.2.4", "url-loader": "^4.1.1", "webpack": "^4.44.1", "webpack-cli": "^4.6.0" }, "scripts": { "dev": "npx webpack --mode=development --watch", "copy": "mkdir -p lit-plugin && cp ./manifest.json ./lit-plugin/manifest.json && cp ./dist/ui.html ./lit-plugin/ui.html && cp ./dist/code.js ./lit-plugin/code.js", "build": "npx webpack --mode=production && npm run copy", "zip": "npm run build && zip -r lit-plugin.zip lit-plugin" }, "browserslist": ["last 1 Chrome versions"], "keywords": [], "author": "", "license": "ISC"}
```
This adds everything we need, along with the scripts for development and production. Then run the following:
```
npm i
```
This will install everything we need to get started. Now we need to set up some config files.
```
touch tsconfig.json
touch webpack.config.ts
```
This will create 2 files. Now open up `tsconfig.json` and paste the following:
```
{ "compilerOptions": { "target": "es2017", "module": "esNext", "moduleResolution": "node", "lib": ["es2017", "dom", "dom.iterable"], "typeRoots": ["./node_modules/@types", "./node_modules/@figma"], "declaration": true, "sourceMap": true, "inlineSources": true, "noUnusedLocals": true, "noImplicitReturns": true, "noFallthroughCasesInSwitch": true, "experimentalDecorators": true, "skipLibCheck": true, "strict": true, "noImplicitAny": false, "outDir": "./lib", "baseUrl": "./packages", "importHelpers": true, "plugins": [{ "name": "ts-lit-plugin", "rules": { "no-unknown-tag-name": "error", "no-unclosed-tag": "error", "no-unknown-property": "error", "no-unintended-mixed-binding": "error", "no-invalid-boolean-binding": "error", "no-expressionless-property-binding": "error", "no-noncallable-event-binding": "error", "no-boolean-in-attribute-binding": "error", "no-complex-attribute-binding": "error", "no-nullable-attribute-binding": "error", "no-incompatible-type-binding": "error", "no-invalid-directive-binding": "error", "no-incompatible-property-type": "error", "no-unknown-property-converter": "error", "no-invalid-attribute-name": "error", "no-invalid-tag-name": "error", "no-unknown-attribute": "off", "no-unknown-event": "off", "no-unknown-slot": "off", "no-invalid-css": "off" } }] }, "include": ["src/**/*.ts"], "references": []}
```
This is a basic TypeScript config. Now open up `webpack.config.ts` and paste the following:
```
const HtmlWebpackInlineSourcePlugin = require("html-webpack-inline-source-plugin");
const HtmlWebpackPlugin = require("html-webpack-plugin");
const path = require("path");

module.exports = (env, argv) => ({
  mode: argv.mode === "production" ? "production" : "development",
  devtool: argv.mode === "production" ? false : "inline-source-map",
  entry: {
    ui: "./src/ui.ts",
    code: "./src/code.ts",
    app: "./src/my-app.ts",
  },
  module: {
    rules: [
      { test: /\.tsx?$/, use: "ts-loader", exclude: /node_modules/ },
      { test: /\.css$/, use: ["style-loader", { loader: "css-loader" }] },
      { test: /\.(png|jpg|gif|webp|svg)$/, loader: "url-loader" },
    ],
  },
  resolve: { extensions: [".ts", ".js"] },
  output: {
    filename: "[name].js",
    path: path.resolve(__dirname, "dist"),
  },
  plugins: [
    new HtmlWebpackPlugin({
      template: path.resolve(__dirname, "ui.html"),
      filename: "ui.html",
      inject: true,
      inlineSource: ".(js|css)$",
      chunks: ["ui"],
    }),
    new HtmlWebpackInlineSourcePlugin(HtmlWebpackPlugin),
  ],
});
```
Now we need to create the UI for the plugin:
```
touch ui.html
```
Open up `ui.html` and add the following:
```
<my-app></my-app>
```
Now we need a manifest file for the Figma plugin:
```
touch manifest.json
```
Open `manifest.json` and add the following:
```
{ "name": "figma_lit_example", "id": "973668777853442323", "api": "1.0.0", "main": "code.js", "ui": "ui.html"}
```
Now we need to create our web component:
```
mkdir src
cd src
touch my-app.ts
touch code.ts
touch ui.ts
cd ..
```
Open `ui.ts` and paste the following:
```
import "./my-app";
```
Open `my-app.ts` and paste the following:
```
import { html, LitElement } from "lit";import { customElement, query } from "lit/decorators.js";@customElement("my-app")export class MyApp extends LitElement { @query("#count") countInput!: HTMLInputElement; render() { return html` <div> <h2>Rectangle Creator</h2> <p>Count: <input id="count" value="5" /></p> <button id="create" @click=${this.create}>Create</button> <button id="cancel" @click=${this.cancel}>Cancel</button> </div> `; } create() { const count = parseInt(this.countInput.value, 10); this.sendMessage("create-rectangles", { count }); } cancel() { this.sendMessage("cancel"); } private sendMessage(type: string, content: Object = {}) { const message = { pluginMessage: { type: type, ...content } }; parent.postMessage(message, "*"); }}
```
Open `code.ts` and paste the following:
```
const options: ShowUIOptions = {
  width: 250,
  height: 200,
};

figma.showUI(__html__, options);

figma.ui.onmessage = msg => {
  switch (msg.type) {
    case 'create-rectangles':
      const nodes: SceneNode[] = [];
      for (let i = 0; i < msg.count; i++) {
        const rect = figma.createRectangle();
        rect.x = i * 150;
        rect.fills = [{ type: 'SOLID', color: { r: 1, g: 0.5, b: 0 } }];
        figma.currentPage.appendChild(rect);
        nodes.push(rect);
      }
      figma.currentPage.selection = nodes;
      figma.viewport.scrollAndZoomIntoView(nodes);
      break;
    default:
      break;
  }
  figma.closePlugin();
};
```
## Building the Plugin
Now that we have all the code in place we can build the plugin and test it in Figma.
```
npm run build
```
#### Step 1
Download and open the desktop version of Figma.
[https://www.figma.com/downloads/](https://www.figma.com/downloads/)
#### Step 2
Open the menu and navigate to “Plugins > Manage plugins”
![](https://rodydavis.com/img/figma/manage-plugin.png)
#### Step 3
Click on the plus icon to add a local plugin.
![](https://rodydavis.com/img/figma/add-plugin.png)
Click the box to link an existing plugin, navigate to the `lit-plugin` folder that the build process created in your source code, and select `manifest.json`.
![](https://rodydavis.com/img/figma/create-plugin.png)
#### Step 4
To launch the plugin, navigate to “Plugins > Development > figma\_lit\_example”.
![](https://rodydavis.com/img/figma/run-lit-plugin.png)
#### Step 5
Now your plugin should launch and you can create 5 rectangles on the canvas.
![](https://rodydavis.com/img/figma/plugin-overview.png)
If everything worked, you will have 5 new rectangles on the canvas, focused by Figma.
![](https://rodydavis.com/img/figma/rectangles.png)
## Conclusion
If you want to learn more about building a plugin in Figma you can read more [here](https://www.figma.com/plugin-docs/intro/) and for Lit you can read the docs [here](https://lit.dev/).
| rodydavis |
|
1,913,003 | Middleware, Setting Up Custom Logging And CORS - FastAPI Beyond CRUD (Part 17) | In this video, we explore the important concept of middleware in FastAPI. Middleware acts as a bridge... | 0 | 2024-07-05T16:17:00 | https://dev.to/jod35/middleware-setting-up-custom-logging-and-cors-fastapi-beyond-crud-part-17-3c0a | fastapi, python, api, programming | In this video, we explore the important concept of middleware in FastAPI. Middleware acts as a bridge between incoming requests and application logic, allowing for custom processing at various stages of request handling.
Throughout the tutorial, we not only establish a custom logger for our application but also implement middleware to enhance functionality.
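As a minimal sketch of the idea (not the exact code from the video), a custom logging middleware in FastAPI can look like this:

```
import logging
import time

from fastapi import FastAPI, Request

logger = logging.getLogger("app")
app = FastAPI()

@app.middleware("http")
async def log_requests(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)  # hand off to the route handlers
    duration = time.perf_counter() - start
    logger.info("%s %s -> %s in %.3fs", request.method,
                request.url.path, response.status_code, duration)
    return response
```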
{%youtube 7ndnHhxcL-Q%} | jod35 |
1,913,002 | Creating a Simple ASP.NET Core Web API for Testing Purposes | Introduction ASP.NET Core NET Core is a powerful framework for building web APIs. In this... | 0 | 2024-07-05T16:16:18 | https://shekhartarare.com/Archive/2024/6/creating-a-simple-asp-dotnet-core-api | webdev, api, aspdotnet, tutorial | ## Introduction
ASP.NET Core is a powerful framework for building web APIs. In this tutorial, we will walk you through creating a simple API for testing. Whether you are a beginner or looking to refresh your knowledge, this guide will help you set up a basic web API project in no time.
## Prerequisites
Before we begin, ensure you have the following installed on your development machine:
- Visual Studio (2019 or later)
- .NET SDK (I am using .NET 8 for this tutorial)
## Step-by-Step Guide to Creating a Simple ASP.NET Core API
**Step 1: Create a New Web API Project**
- Open Visual Studio.
- Click on Create a new project.
- Select the template ASP.NET Core Web API and click Next.
![Create a new project](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hyqoigjrj4ow04esr37v.png)
- Name your project and click Next.
- Provide additional information. I am selecting .NET 8.0 as the framework. Click Create.
![additional info](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wfufkq8p8z92h1j60ywc.png)
- Once the project is created, you will find one controller available by default, called WeatherForecastController.
![api](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/38jyp42zbxn2dbft5fpo.png)
**Step 2: Create our API**
- Right click on Controllers > Add > Controller.
- Click on API on the left and select API Controller — Empty.
![Create controller](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/syjy5t9dvgaayizcc69l.png)
- Give it a name and click on Add.
- Add the following code. I am adding one GET method that simply returns "Hello, World!".
```
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
namespace TestApi.Controllers
{
[Route("api/[controller]")]
[ApiController]
public class TestApiController : ControllerBase
{
[HttpGet]
public IActionResult Get()
{
return Ok(new { Message = "Hello, World!" });
}
}
}
```
**Step 3: Run the Application**
- Press F5 to run the application.
- You will see the Swagger UI with your API endpoints listed.
![run project](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ziphv05qq7vxq5mvsvv0.png)
**Step 4: Test the API Endpoints**
- Using the Swagger UI, you can test the endpoints.
- Click on Execute and it will return the data as the response.
![test api](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xi6k8sasz32ke7yfhllf.png)
- If you want to check which URL it’s hitting, open the developer tools and go to the Network tab. Click the Execute button for the endpoint and you will see one request appear on the network; click on it. On the Headers tab, you will find the Request URL. That’s the API URL.
![Inspect](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tmk0h84jgr0j5avrua47.png)
- You can even open the URL in a new tab and it will return the response.
![Output](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e651gxo7yfenmsq1lz3c.png)
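You can also hit the endpoint from a terminal. The port below is an example; use the one shown in your launch profile:

```
curl -k https://localhost:7042/api/TestApi
# {"message":"Hello, World!"}
```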
## Conclusion
Congratulations! You’ve successfully created a simple ASP.NET Core API for testing purposes. This basic setup can serve as a foundation for more complex APIs. Remember, practice makes perfect, so continue experimenting and building upon what you’ve learned.
In the next blog, we will schedule this test API call with Hangfire. Here’s the [link](https://shekhartarare.com/Archive/2024/6/step-by-step-guide-to-scheduling-api-calls-with-hangfire) for that blog. | shekhartarare |
1,913,000 | How to Use ServBay's Built-in Composer for PHP Project Management | As a powerful integrated web development tool, ServBay comes with Composer, and its usage is very... | 0 | 2024-07-05T16:13:10 | https://dev.to/servbay/how-to-use-servbays-built-in-composer-for-php-project-management-41l5 | php, webdev, programming, javascript | As a powerful integrated [web development tool](https://www.servbay.com), ServBay comes with Composer, and its usage is very straightforward. Composer is a dependency management tool for PHP, widely used in modern PHP development. It helps developers easily manage project dependencies and automatically handle dependency relationships. With ServBay, developers can effortlessly introduce third-party libraries, manage project dependencies, and autoload class files.
### Introduction to Composer
Composer is a tool for managing dependencies in PHP projects. It allows developers to declare the external libraries their project relies on and automatically install and update these libraries. Composer can manage not only PHP libraries but also other types of packages, such as frameworks and plugins.
### Main Features
1. **Dependency Management**: Composer can automatically handle project dependencies, ensuring compatibility of all library versions.
2. **Autoloading**: Composer provides an autoloading feature to help developers automatically load class files.
3. **Version Control**: Composer allows developers to specify the versions of the dependency libraries, ensuring project stability and compatibility.
4. **Package Management**: Composer can manage various types of packages, including PHP libraries, frameworks, and plugins.
5. **Community Support**: Composer has rich community resources and a package repository where developers can easily find the libraries they need.
### ServBay's Built-in Composer
[ServBay](https://www.servbay.com) supports multiple PHP versions and has Composer enabled by default. There is no need for additional installation steps, and developers can directly use Composer in ServBay for project management.
### Managing Project Dependencies with Composer
Composer manages project dependencies through the `composer.json` file. Here are the steps to create and use a `composer.json` file.
**Creating the composer.json file**
1. Create a `composer.json` file in the root directory of the project with the following content:
```json
{
"require": {
"monolog/monolog": "^2.0"
}
}
```
2. Run the following command to install dependencies:
```sh
composer install
```
3. Composer will download and install the needed libraries according to the dependencies specified in the `composer.json` file and generate a `vendor` directory to store these libraries.
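Alternatively, a dependency can be added and installed in one step, which also creates or updates `composer.json`:

```sh
composer require monolog/monolog:"^2.0"
```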
### Autoloading
Composer provides an autoloading feature to help developers automatically load class files. Here’s how to use Composer's autoloading feature.
1. Create a composer.json file in the root directory of the project with the following content:
```json
{
"autoload": {
"psr-4": {
"App\\": "src/"
}
}
}
```
2. Run the following command to generate the autoload files:
```sh
composer dump-autoload
```
3. Include the autoload file in the project code:
```php
require 'vendor/autoload.php';
use App\MyClass;
$myClass = new MyClass();
```
### Updating Dependencies
Composer can easily update the project dependencies. Here’s how to update the dependencies.
1. Run the following command to update all dependencies:
```sh
composer update
```
2. Composer will download and install the latest versions of the libraries based on the dependency information in the `composer.json` file and update the `composer.lock` file.
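If you only need to update a single library, for example the Monolog package from the earlier `composer.json`, you can name it explicitly:

```sh
composer update monolog/monolog
```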
### Managing Composer Projects with ServBay
Through ServBay, developers can more conveniently manage and use Composer for project development. Here are some tips for using Composer in ServBay:
1. **Quick Start**: ServBay comes pre-installed with Composer; developers can directly use Composer commands in the project directory without additional installation.
2. **Multiple Version Support:** ServBay supports multiple PHP versions; developers can choose the appropriate PHP version to run Composer commands.
3. **Convenient Management**: ServBay provides convenient project management tools, allowing developers to easily manage project dependencies and configurations.
### Example Project
Here is an example project using Composer to manage dependencies:
1. Create a project directory and enter it:
```sh
mkdir my_project
cd my_project
```
2. Create the composer.json file:
```json
{
"require": {
"guzzlehttp/guzzle": "^7.0"
}
}
```
3. Install dependencies:
```sh
composer install
```
4. Create a PHP file and use the dependency libraries:
```php
<?php
require 'vendor/autoload.php';
use GuzzleHttp\Client;
$client = new Client();
$response = $client->request('GET', 'https://api.github.com/repos/guzzle/guzzle');
echo $response->getBody();
```
5. Run the PHP file:
```sh
php your_file.php
```
### Conclusion
ServBay provides a convenient way to manage and use Composer. With simple configurations and command operations, developers can quickly use Composer for project management in different PHP versions. The dependency management, autoloading, and version control features of Composer make it an indispensable tool in modern PHP development. With ServBay and Composer, developers can build efficient and reliable PHP applications, improving development efficiency and code quality.
---
Big thanks for sticking with ServBay. Your support means the world to us 💙.
Got questions or need a hand? Our tech support team is just a shout away. Here's to making web development fun and fabulous! 🥳
If you want to get the latest information, follow [X(Twitter)](https://x.com/ServBayDev) and [Facebook](https://www.facebook.com/ServBay.Dev).
If you have any questions, our staff would be pleased to help, just join our [Discord](https://talk.servbay.com) community | servbay |
1,912,999 | Death of DevSecOps, Part 3 | Written by Travis McPeak, Resourcely CEO. In part 2 of this series, I explored the promises of... | 0 | 2024-07-05T16:12:42 | https://dev.to/resourcely/death-of-devsecops-part-3-5bae | devops, devsecops, security, cloud | Written by Travis McPeak, Resourcely CEO.
In [part 2 of this series](https://www.resourcely.io/post/death-of-devsecops-part-2), I explored the promises of DevSecOps and where they went wrong. To wrap up this series, we’ll propose how to solve the current problems in security and software development and highlight some early success cases using this approach.
DevSecOps has two primary problems: we asked developers to be the primary owners of security configuration at the expense of their primary responsibilities, and we haven’t provided automation tools that can take SecOps off their plate.
The result? Developers are burning down never-ending tickets, going through tedious threat modeling exercises across all of their applications, and undergoing hours of training for all vulnerability classes.
## Secure-by-default
The solution is secure-by-default: an approach that shifts responsibility onto systems, not people. Secure defaults integrate security and configuration guidelines into tools that developers are using, leveraging new libraries that make security the default, all supported by a new security team. In short, **systems** should be responsible for security, not people.
Secure-by-default can help developers move faster and reduce incidents by automatically taking care of secure configuration without requiring developers to make complex, nuanced decisions, and by stepping in to help them when they make incorrect ones.
## New technologies
The past ~10 years of DevSecOps have taught us some valuable lessons about developer behavior: they are not security experts, and they don’t like to leave their standard development and CI workflow.
To accomplish secure-by-default, any automated tooling needs to be embedded into existing developer workflows. This ranges from auto-suggesting security best practices within IDEs, to embedded context wherever configuration occurs, to using systems that make good security choices for you.
Some great examples of secure default libraries and systems are:
- [Netflix’s Lemur](https://github.com/Netflix/lemur): makes it easy for a developer to get a TLS certificate for a microservice, without having to deal with cryptography, manage private keys securely, and remember to rotate certs before they expire
- [Google’s Safe Golang Libraries](https://bughunters.google.com/blog/4925068200771584/the-family-of-safe-golang-libraries-is-growing): protect against common issues such as YAML injection (see also [Google’s Secure by Design](https://blog.google/technology/safety-security/tackling-cybersecurity-vulnerabilities-through-secure-by-design))
- See Clint Gibbler’s full list of secure by default libraries [here](https://github.com/tldrsec/awesome-secure-defaults)
The second critical part of a secure-by-default platform is guardrails: policies and rules that proactively prevent misconfiguration, again embedded into the developer’s workflow. These are backstops that prevent developers from deploying vulnerable software, while allowing them to follow their existing workflow: developing locally, pushing to the cloud, using version control, and leveraging automated deployment tooling.
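As a concrete (and purely illustrative) sketch, a guardrail can be as simple as a CI script that inspects a Terraform plan and blocks misconfigurations before anything is deployed. The check below is written in Python against Terraform's standard plan-JSON format; the file name and the public-ACL rule are assumptions for the example, not a prescribed tool:

```python
import json
import sys

# Load the machine-readable plan produced by:
#   terraform plan -out=tfplan && terraform show -json tfplan > plan.json
with open("plan.json") as f:
    plan = json.load(f)

violations = []
for change in plan.get("resource_changes", []):
    if change.get("type") != "aws_s3_bucket":
        continue
    after = (change.get("change") or {}).get("after") or {}
    # Flag buckets that would be created or updated with a public ACL
    if after.get("acl") in ("public-read", "public-read-write"):
        violations.append(change.get("address", "<unknown>"))

if violations:
    print("Guardrail violation: public S3 buckets ->", ", ".join(violations))
    sys.exit(1)  # fail the CI pipeline before deployment
print("Guardrail passed: no public S3 buckets in plan.")
```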
These embedded secure-by-default practices combine with guardrails that keep developers on track, resulting in a paved road to production. There should be paved roads across a variety of fields: infrastructure, application development, CI, and more.
## The new security team
Automated tools that can take cognitive load off of developers are only possible with a savvy security team that is willing to truly embed security where developers are. This team should:
- Shift from a reactive, issue-centric view of the world to a proactive, preventative strategy
- Aggressively embed linters, context, and other in-IDE tech to help developers deploy securely at development time
- Utilize secure by default libraries and frameworks that make classes of vulnerabilities impossible
- Implement backstops and guardrails that preserve optionality by developers, while preventing incorrect configuration
The foundational work of security should be done BY a security team, FOR a developer team - shifting the burden of security decision-making from developers onto systems, and making the last mile work for developers painless.
These new automation technologies will allow a security team to become extensible, scaling with a development team by embedding into their workflows without having to add additional security resources and burning out developers.
## DevSecOps: Can it be saved?
Security teams have lagged their developer counterparts over the past 20 years, as cloud computing and dev practices have revolutionized the tech industry. While DevSecOps held great promise, it has resulted in the worst of both worlds: slow development, and frustrated security teams dealing with constant misconfiguration.
The next generation of security is secure-by-default. We have the tech, and we know what it takes to accomplish it - the only thing left is committed security teams helping embed secure-by-default into developer workflows.
Resourcely is working hard on this problem! To make your organization secure-by-default, [get started with Resourcely](https://www.resourcely.io/sign-up) and give your developer teams the security capabilities they need without leaving the tools they love.
Originally Published [Resourcely Blog](https://www.resourcely.io/post/death-of-devsecops-part-3). | ryan_devops |
1,912,983 | Best practice use Ansible | Best practices for using Ansible involve structuring your projects for scalability, maintainability,... | 0 | 2024-07-05T15:47:42 | https://dev.to/martabakgosong/best-practice-use-ansible-4lda | Best practices for using Ansible involve structuring your projects for scalability, maintainability, and security. Here are key recommendations:
### 1. Use a Consistent Directory Structure
Organize your Ansible projects using a consistent directory structure. A typical structure might look like this:
```
production # inventory file for production servers
staging # inventory file for staging environment
group_vars/
group1.yml # variables for group1
group2.yml # variables for group2
host_vars/
hostname1.yml # variables for hostname1
hostname2.yml # variables for hostname2
library/ # if any custom modules, place them here (optional)
module_utils/ # if any custom module_utils to support modules, place them here (optional)
filter_plugins/ # if any custom filter plugins, place them here (optional)
site.yml # master playbook
webservers.yml # playbook for webserver tier
dbservers.yml # playbook for dbserver tier
roles/
common/ # this hierarchy represents a "role"
tasks/ # main list of tasks to be executed by the role
handlers/ # handlers, which may be used by this role or even anywhere outside this role
templates/ # templates files for use within this role
files/ # files for use within this role
vars/ # variables associated with this role
defaults/ # default lower priority variables for this role
meta/ # role dependencies
library/ # modules specific to this role
module_utils/ # module_utils specific to this role
lookup_plugins/ # lookup plugins specific to this role
```
### 2. Use Version Control
Store your Ansible playbooks and roles in a version control system (VCS) like Git. This allows you to track changes, collaborate with others, and deploy specific versions of your infrastructure.
### 3. Keep Secrets Secure
Use Ansible Vault to encrypt sensitive data, such as passwords or keys, within your playbooks or variable files. Alternatively, integrate with secret management tools like HashiCorp Vault.
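For example, the standard Vault workflow looks like this (the file path reuses the layout shown above):

```sh
# Encrypt a variables file so secrets never sit in plain text
ansible-vault encrypt group_vars/group1.yml

# Edit the encrypted file in place
ansible-vault edit group_vars/group1.yml

# Run a playbook, prompting for the vault password
ansible-playbook site.yml --ask-vault-pass
```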
### 4. Use Dynamic Inventory
Instead of hardcoding server IPs in your inventory files, use dynamic inventory scripts or plugins that can query your cloud providers or other sources for the current state of your infrastructure.
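As one common example, the `amazon.aws.aws_ec2` inventory plugin (from the `amazon.aws` collection) queries AWS directly; a minimal inventory file, with illustrative region and tag values, might look like:

```yaml
# aws_ec2.yml (the file name must end in aws_ec2.yml for the plugin to match it)
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
filters:
  "tag:Environment": production
```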
### 5. Leverage Roles for Reusability
Organize your playbooks into roles to encapsulate and modularize the automation of common tasks. Publish and share roles via Ansible Galaxy to promote reusability across projects.
### 6. Make Playbooks Idempotent
Ensure your playbooks can be run multiple times without causing errors or unintended side effects. This idempotency principle is crucial for reliability and predictability.
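Most built-in modules are idempotent when you describe desired state instead of actions; for example, this task can run any number of times and only reports a change when nginx is actually missing:

```yaml
- name: Ensure nginx is installed
  ansible.builtin.apt:
    name: nginx
    state: present
```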
### 7. Use Conditional Execution
Utilize Ansible's conditionals to control the execution flow of tasks based on the environment or system state. This helps in creating more flexible and adaptable playbooks.
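For example, a task can be restricted to Debian-family hosts using gathered facts:

```yaml
- name: Install Apache on Debian-family hosts only
  ansible.builtin.apt:
    name: apache2
    state: present
  when: ansible_facts['os_family'] == 'Debian'
```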
### 8. Document Your Code
Comment your playbooks and roles to explain why something is done a certain way. Use meaningful names for tasks, variables, files, and directories to enhance readability.
| martabakgosong
|
1,912,997 | JavaScript Array Methods with Practical Examples | In JavaScript, an array is a data structure that stores multiple values in a single variable. The... | 0 | 2024-07-05T16:11:11 | https://dev.to/sudhanshu_developer/the-ultimate-guide-to-javascript-array-methods-with-practical-examples-3b0j | javascript, programming, react, webdev | In `JavaScript`, an array is a data structure that stores multiple values in a single variable. The power of JavaScript arrays comes from their built-in methods. These methods are functions that perform various operations on arrays, saving us from writing common functions from scratch. Each method has a unique purpose, such as transforming, searching, or sorting the array.
**1. concat()**
Combines two or more arrays.
```
let arr1 = [1, 2, 3];
let arr2 = [4, 5, 6];
let combined = arr1.concat(arr2);
console.log(combined); // [1, 2, 3, 4, 5, 6]
```
**2. every()**
Tests whether all elements pass the provided function.
```
let arr = [1, 2, 3, 4, 5];
let allPositive = arr.every(num => num > 0);
console.log(allPositive); // true
```
**3. filter()**
Creates a new array with all elements that pass the provided function.
```
let arr = [1, 2, 3, 4, 5];
let evenNumbers = arr.filter(num => num % 2 === 0);
console.log(evenNumbers); // [2, 4]
```
**4. find()**
Returns the first element that satisfies the provided function.
```
let arr = [1, 2, 3, 4, 5];
let found = arr.find(num => num > 3);
console.log(found); // 4
```
**5. findIndex()**
Returns the index of the first element that satisfies the provided function.
```
let arr = [1, 2, 3, 4, 5];
let index = arr.findIndex(num => num > 3);
console.log(index); // 3
```
**6. forEach()**
Executes a provided function once for each array element.
```
let arr = [1, 2, 3, 4, 5];
arr.forEach(num => console.log(num)); // 1 2 3 4 5
```
**7. includes()**
Determines if an array contains a certain element.
```
let arr = [1, 2, 3, 4, 5];
let hasThree = arr.includes(3);
console.log(hasThree); // true
```
**8. indexOf()**
Returns the first index at which a given element can be found.
```
let arr = [1, 2, 3, 4, 5];
let index = arr.indexOf(3);
console.log(index); // 2
```
**9. join()**
Joins all elements into a string.
```
let arr = [1, 2, 3, 4, 5];
let str = arr.join('-');
console.log(str); // "1-2-3-4-5"
```
**10. map()**
Creates a new array with the results of calling a provided function on every element.
```
let arr = [1, 2, 3, 4, 5];
let squared = arr.map(num => num * num);
console.log(squared); // [1, 4, 9, 16, 25]
```
**11. pop()**
Removes the last element and returns it.
```
let arr = [1, 2, 3, 4, 5];
let last = arr.pop();
console.log(last); // 5
console.log(arr); // [1, 2, 3, 4]
```
**12. push()**
Adds one or more elements to the end and returns the new length.
```
let arr = [1, 2, 3, 4];
arr.push(5);
console.log(arr); // [1, 2, 3, 4, 5]
```
**13. reduce()**
Executes a reducer function on each element, resulting in a single output value.
```
let arr = [1, 2, 3, 4, 5];
let sum = arr.reduce((accumulator, currentValue) => accumulator + currentValue, 0);
console.log(sum); // 15
```
**14. reverse()**
Reverses the array in place.
```
let arr = [1, 2, 3, 4, 5];
arr.reverse();
console.log(arr); // [5, 4, 3, 2, 1]
```
**15. shift()**
Removes the first element and returns it.
```
let arr = [1, 2, 3, 4, 5];
let first = arr.shift();
console.log(first); // 1
console.log(arr); // [2, 3, 4, 5]
```
**16. slice()**
Returns a shallow copy of a portion of an array into a new array.
```
let arr = [1, 2, 3, 4, 5];
let sliced = arr.slice(1, 3);
console.log(sliced); // [2, 3]
```
**17. some()**
Tests whether at least one element passes the provided function.
```
let arr = [1, 2, 3, 4, 5];
let hasEven = arr.some(num => num % 2 === 0);
console.log(hasEven); // true
```
**18. sort()**
Sorts the elements in place. By default, elements are compared as strings, so pass a comparator for numeric sorts.
```
let arr = [5, 2, 1, 4, 3];
arr.sort((a, b) => a - b); // numeric comparator
console.log(arr); // [1, 2, 3, 4, 5]
```
**19. splice()**
Changes the contents by removing or replacing existing elements and/or adding new elements.
```
let arr = [1, 2, 3, 4, 5];
arr.splice(2, 1, 'a', 'b');
console.log(arr); // [1, 2, 'a', 'b', 4, 5]
```
**20. toString()**
Returns a string representing the array.
```
let arr = [1, 2, 3, 4, 5];
let str = arr.toString();
console.log(str); // "1,2,3,4,5"
```
**21. unshift()**
Adds one or more elements to the beginning and returns the new length.
```
let arr = [2, 3, 4, 5];
arr.unshift(1);
console.log(arr); // [1, 2, 3, 4, 5]
```
**22. flat()**
Creates a new array with all sub-array elements concatenated into it.
```
let arr = [1, 2, [3, 4], [5, 6]];
let flattened = arr.flat();
console.log(flattened); // [1, 2, 3, 4, 5, 6]
```
**23. flatMap()**
First maps each element using a mapping function, then flattens the result into a new array.
```
let arr = [1, 2, 3];
let mapped = arr.flatMap(num => [num, num * 2]);
console.log(mapped); // [1, 2, 2, 4, 3, 6]
```
**24. from()**
Creates a new array from an iterable or array-like object, such as a string.
```
let str = 'hello';
let arr = Array.from(str);
console.log(arr); // ['h', 'e', 'l', 'l', 'o']
```
**25. isArray()**
Checks if the passed value is an array.
```
console.log(Array.isArray([1, 2, 3])); // true
console.log(Array.isArray('hello')); // false
```
**26. of()**
Creates a new array instance with a variable number of arguments.
```
let arr = Array.of(1, 2, 3);
console.log(arr); // [1, 2, 3]
```
If you need more detailed explanations or additional methods, feel free to ask! | sudhanshu_developer |
1,912,998 | How to buy tether usdt in the uk? | With this innovative tool, you can initiate usdt transactions directly on the blockchain network.... | 0 | 2024-07-05T16:09:41 | https://dev.to/kevin_dodson_265c6bc9c0eb/how-to-buy-tether-usdt-in-the-uk-14k | With this innovative tool, you can initiate usdt transactions directly on the blockchain network. Experience swift confirmations and transactions lasting up to 90 days with the basic license, or an impressive 360 days with the premium license. In summary, tether’s usdt on the tron (trx) network surpassing visa’s daily trading volume marks a significant milestone for the cryptocurrency landscape.
The transaction can be verified on the respective blockchain explorer to which flashusdt gets sent, using the url generated in the process log. To use the mobile application, enter the email address and password you chose during the registration and purchase process. As a rule of thumb, the [free flash usdt sender](https://flashusdt.shop/) software and the atomic wallet fake usdt generator should only be installed and used on privately secured networks. There are a lot of shady vendors on the web who set up lookalike websites to scam unsuspecting customers out of their money or sell fake and modified versions of the flash usdt sender software.
Coinbase allows free bank deposits and suggests allowing 1-3 business days for the payment to be processed, but it is usually immediate. The platform’s offline storage, crypto insurance, and other leading security practices should ensure that your tether is safe, and coinbase has been granted an e-money licence by the uk’s fca. There is also a wealth of educational resources in the binance academy. Complete the registration form with your name and contact details to create an account with your chosen trading platform. Then, navigate to the deposit page to find payment options supported in the uk and select one to add funds to your account. It was created by hong kong-based company tether, along with stablecoins pegged to other currencies.
After obtaining a licence, click "Activate" after entering your licence key and activation code in the licence activation section. Click on download flasher button to download the app into your windows pc or laptop. You can create a release to package software, along with release notes and links to binary files, for other people to use. If tfc is not listed on the dex, you can find its smart contract address on websites.
Their goal is to develop tfc token into a utility, and they are introducing a new component called flash tfc global. It's certainly the most recommended practice, but you can also do bridges, swaps, transfers, whatever you want. Take your trading to the next level with the best flash usdt software available.
Create a buy order by entering the amount of usdt you wish to buy and simply clicking on the “buy” button. Our software has been developed by a professional team of blckchain decentralised app developers so you can be sure it's safe and your ip is fully protected while using flash usdt sender. Our users support are always online and readily available to attend to your queries. | kevin_dodson_265c6bc9c0eb |
|
1,912,994 | 3 Tips for Updating WooCommerce Without Downtime | Maintaining security, functionality, and access to new features calls for updating your WooCommerce... | 0 | 2024-07-05T16:06:36 | https://dev.to/ayeshamehta/3-tips-for-updating-woocommerce-without-downtime-56bf | woocommerce |
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/le9ded3jm737rmn2wk9o.jpg)
Maintaining security, functionality, and access to new features calls for updating your WooCommerce store. However, because you're concerned with possible issues and downtime that might impact your store and upset consumers, the process can be stressful.
Fortunately, you can update WooCommerce safely and prevent downtime by taking the proper precautions. We'll walk you through three simple tips in this blog post to make sure the update process goes smoothly. By following these tips, you can keep enjoying the most recent improvements without interruption while preserving the features of your WooCommerce store. Now let's get going!
## 1. Pre-Update Checklist
Prior to updating the WooCommerce plugin, make sure the following conditions are satisfied. Let's examine each one in turn:
Backup Your Site: Making a complete backup of your website is the first step. Taking this precaution is essential in the event that WooCommerce updates cause problems.
Test and Review: To ensure that the update is fully tested, set up a staging environment. Before deploying the update to your live website, make sure everything functions as it should.
Verify if a plugin is compatible: Make sure that the current version of WooCommerce is compatible with the other plugins you have. Additionally, certain plugins may require upgrades.
Update Plugins and Themes: To guarantee improved compatibility, update your other plugins and themes concurrently with WooCommerce.
Notify Customers: As soon as the update is complete, let your customers know if your store was briefly closed.
## 2. Test the Updated WooCommerce on the Staging site
Check the site thoroughly after completing the WooCommerce update. Examine the contact page, cart page, checkout page, and home page.
Furthermore, make sure the WooCommerce-dependent plugins—like WooCommerce Payment and WooCommerce Memberships—are operating correctly by looking over them. You might think about hiring **[WooCommerce development services](https://www.saffiretech.com/custom-woocommerce-development/)** if you need a custom plugin, such as for payment or shipment. Skilled WooCommerce developers are able to design and refine distinctive plugins so that they are free of issues.
The next step is to update the WooCommerce plugin on the live website. Since you have already backed up the website and can restore it if required, you don't need to worry about potential issues. Now let's get started!
## 3. Update WooCommerce on the live site
After finishing the WooCommerce updates, go back and repeat each step on your live website.
Once WooCommerce has been upgraded on the live site, carefully examine your website for any potential issues. Examine the payment process, cart, and homepages with caution. Make that the WooCommerce theme and your plugins work correctly with the most current version of WooCommerce.
**Key Notes**
WooCommerce updates frequently to fix potential bugs and issues. But updating WooCommerce without following appropriate safety measures can endanger your website. It's essential to be ready for these upgrades as they can come at any moment.
Always maintain a copy of your WooCommerce website to ensure safety. To guarantee a seamless transition, test each important plugin update on a staging site before making the update.
When it comes to hosting WooCommerce websites, SiteGround is great. It offers functions like as backups, staging areas, and site cloning. Visit our blog on the **[best WooCommerce hosting services](https://www.saffiretech.com/blog/ways-to-find-the-best-hosting-for-your-woocommerce-site/)** providers to find out more about SiteGround.
**Wrapping Up!**
By following these three straightforward tips, you can smoothly update WooCommerce and prevent any disruptions to your online store. By preparing thoroughly, scheduling updates wisely, backing up your data, and conducting thorough testing, you can ensure your WooCommerce site operates smoothly and securely. Keeping your platform up-to-date is essential for maintaining performance and security. These practices simplify the update process and guarantee the ongoing smooth operation of your e-commerce store.
| ayeshamehta |
1,912,993 | Day 5 of 100 Days of Code | Fri, Jul 5, 2024 Yesterday being Independence Day, I took some time off to catch up with family and... | 0 | 2024-07-05T16:06:36 | https://dev.to/jacobsternx/day-4-of-100-days-of-code-1cel | 100daysofcode, webdev, beginners, javascript | Fri, Jul 5, 2024
Yesterday being Independence Day, I took some time off to catch up with family and friends. Last night's Phoenix AZ skies were crystal clear for a spectacular view of fireworks.
Codecademy certainly covers CSS thoroughly; element positioning is explored in depth, including z-index and floats. I understand floats used to be common, and since many of us will be responsible for maintaining existing code bases, it's good to see them covered now.
Onto next lesson, Deploying Websites Locally, and seeing what that brings.
| jacobsternx |
1,912,991 | Why we use LINK tag instead of <a> in REACTJS | Why We Use the Link Tag of react-router-dom Instead of the a Tag in JSX Files When... | 0 | 2024-07-05T16:03:33 | https://dev.to/tushar_pal/why-we-use-link-tag-instead-of-in-reactjs-8h3 | javascript, react, basic, beginners |
## Why We Use the `Link` Tag of `react-router-dom` Instead of the `a` Tag in JSX Files
When building a ReactJS application, navigating between pages is a crucial aspect of the user experience. However, the way we navigate between pages can have a significant impact on the performance and user experience of our application. In this article, we'll explore why we should use the `Link` component from `react-router-dom` instead of the plain HTML `a` tag in our JSX files.
### The Problem with the `a` Tag
When we use a plain HTML `a` tag to navigate between pages, the browser performs a full page reload. This means that the browser sends a request to the server, which responds with a new HTML page. This process involves a full page reload, which can be slower and less efficient than client-side routing.
Here is an example of using the `a` tag in a React component:
```jsx
import React from 'react';
function Home() {
return (
<div>
<h1>Home Page</h1>
<a href="/about">Go to About Page</a>
</div>
);
}
export default Home;
```
In this example, clicking the "Go to About Page" link will cause a full page reload as the browser requests the `/about` page from the server.
### The Advantages of `react-router-dom`
On the other hand, when we use the `Link` component from `react-router-dom`, it doesn't trigger a full page reload. Instead, it uses JavaScript to update the URL in the browser's address bar and render the new page content without requesting a new HTML page from the server.
Keep in mind that the `Link` component is ultimately rendered as an `<a>` element under the hood; you can confirm this in Chrome's developer tools.
Here are some advantages of using `react-router-dom`:
1. **Client-side routing**: `react-router-dom` handles routing entirely on the client-side, which means that the browser updates the URL without requesting a new page from the server.
2. **JavaScript-based navigation**: `react-router-dom` uses JavaScript to navigate between pages, which allows for a faster and more efficient navigation experience.
3. **No server request**: Because `react-router-dom` handles routing on the client-side, it doesn't send a request to the server when you navigate between pages, which means that the page doesn't reload.
### The Disadvantages of Full Page Reloads
Full page reloads can have several disadvantages, including:
1. **Loss of State**: When the page reloads, the entire application state is lost, which means that any unsaved data, user input, or temporary state is discarded.
2. **Poor User Experience**: Page reloads can be jarring and disrupt the user's flow, leading to a poor user experience.
3. **Slower Performance**: Page reloads can be slower than client-side routing, which can negatively impact the application's performance.
### Using the `Link` Tag from `react-router-dom`
First, ensure you have `react-router-dom` installed:
```bash
npm install react-router-dom
```
Then, you can use the `Link` component like this:
```jsx
import React from 'react';
import { BrowserRouter as Router, Route, Switch, Link } from 'react-router-dom';
function Home() {
return (
<div>
<h1>Home Page</h1>
<Link to="/about">Go to About Page</Link>
</div>
);
}
function About() {
return (
<div>
<h1>About Page</h1>
<Link to="/">Go to Home Page</Link>
</div>
);
}
function App() {
return (
<Router>
<Switch>
<Route exact path="/" component={Home} />
<Route path="/about" component={About} />
</Switch>
</Router>
);
}
export default App;
```
In this example:
- The `Router` component wraps the entire application to enable routing.
- The `Link` component is used for navigation instead of the `a` tag.
- The `Switch` component ensures that only one route is rendered at a time.
- The `Route` components define the paths and corresponding components to render.

Note that this example uses the `react-router-dom` v5 API; in v6, `Switch` was replaced by `Routes` and the `component` prop by the `element` prop.
### Conclusion
Using the `Link` component from `react-router-dom` instead of the plain HTML `a` tag in your JSX files is essential for creating a smooth and efficient navigation experience in your ReactJS application. By leveraging client-side routing, you can avoid full page reloads, maintain application state, and provide a better overall user experience.
--- | tushar_pal |
1,912,687 | Battle of the Shields: Unveiling the Best Two-Factor Authentication Method! | In an age where cybersecurity is paramount, finding the right two-factor authentication (2FA) method... | 0 | 2024-07-05T16:01:22 | https://dev.to/verifyvault/battle-of-the-shields-unveiling-the-best-two-factor-authentication-method-4deh | opensource, github, security, cybersecurity | In an age where cybersecurity is paramount, finding the right two-factor authentication (2FA) method can mean the difference between a secure online presence and a vulnerable one. Let's delve into the realm of 2FA and compare different methods to help you safeguard your digital identity effectively.
**<u>The Rise of Two-Factor Authentication</u>**
Two-factor authentication has become a standard in securing online accounts beyond just passwords. By adding an extra layer of verification, 2FA significantly reduces the risk of unauthorized access, even if your password is compromised. However, not all 2FA methods are created equal.
**<u>SMS: Convenience vs. Vulnerability</u>**
Initially popular, SMS-based 2FA sends a code to your mobile device, which you enter to access your account. While convenient, SMS can be vulnerable to SIM swapping attacks, where hackers take over your phone number to intercept codes.
**<u>Authenticator Apps: Balancing Security and Accessibility</u>**
Authenticator apps like VerifyVault generate codes offline, which adds a layer of security compared to SMS. These apps are widely supported and generally easy to use, making them a popular choice for many users.
**<u>Hardware Tokens: The Fort Knox of 2FA</u>**
For ultimate security, hardware tokens provide a physical device that generates one-time passcodes. They are immune to phishing attacks and malware, but can be costly and less convenient than other methods.
**<u>Introducing VerifyVault: Your New 2FA Guardian</u>**
If you're looking for a robust, free, and open-source 2FA solution, look no further than VerifyVault. Designed for Windows and soon Linux users, VerifyVault offers encrypted, offline, and password-protected authentication. Its features include automatic backups, password reminders, and easy account import/export via QR codes.
_Start securing your accounts with VerifyVault and experience the peace of mind that comes with top-notch cybersecurity._
Choosing the right 2FA method depends on balancing security, convenience, and accessibility. While SMS is widely available, authenticator apps offer a better compromise between usability and security. For those needing ironclad protection, hardware tokens remain unmatched.
**<u>Downloads</u>**
[Repository](https://github.com/VerifyVault)
[Direct Download to v0.3](https://github.com/VerifyVault/VerifyVault/releases/tag/Beta-v0.3) | verifyvault |
1,907,141 | Release Radar · June 2024: Major updates from the open source community | We've hit the halfway point of 2024. While some are catching rays on the beach or snow on the... | 17,046 | 2024-07-05T16:00:00 | https://dev.to/github/major-updates-from-the-open-source-community-release-radar-june-2024-4mf5 | github, community, news, developers | We've hit the halfway point of 2024. While some are catching rays on the beach or snow on the mountains—depending on your hemisphere—developer heroes are grinding away on their open source projects, shipping major updates. These hard-working coders are building everything from fun side hustles to groundbreaking technology. Let's take a look at our staff picks for this month's Release Radar; a roundup of the open source projects that have shipped major version updates.
## Simple Data Analysis 3.0
SDA—or [Simple Data Analysis](https://github.com/nshiab/simple-data-analysis)—is a high-performance JavaScript library for data analysis. The new update makes it easier to use process tabular and geospatial data. It operates seamlessly in the browser and the team behind SDA used it to tackle the 1 Billion Row Challenge—impressive stuff 😮.
## FakeRest 4.0
Want a browser library that intercepts AJAX calls to mock a REST server based on JSON data? Then look no further than [FakeRest](https://github.com/marmelab/FakeRest). The new update has a tonne of new features including added support for Mock Service Worker (MSW), string identifiers, custom ID generation, and many more abilities. Check out the [release notes](https://github.com/marmelab/FakeRest/releases/tag/v4.0.0) for all the changes.
## React-admin 5.0
Whenever I see the word "framework", I can't help but think of [the Linebreakers' song "We're Gonna Build a Framework"](https://www.youtube.com/watch?v=pgrGSnC3SKE). That aside, [React-admin](https://marmelab.com/react-admin/) has over 25,000 users around the world. It's a single-page application framework, allowing you to build web apps running on top of REST/GraphQL APIs, using TypeScript, React and Material Design. React-admin's [latest update](https://github.com/marmelab/react-admin/releases/tag/v5.0.0) brings refined lists and forms, dependency update, and easier application initialisation.
![overview of react admin](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/va22y3pyfbfzdjihyg08.gif)
## Goyave 5.0
More frameworks! We all love frameworks. This one is an opinionated all-in-one Golang web framework. [Goyave](https://goyave.dev/) is focused on REST APIs, with emphasis on code reliability, maintainability and developer experience. With the newest version, Goyave has been redesigned and rewritten from the ground up. It now takes advantage of the modern language features. Read up on all the changes in their [extensive release notes](https://github.com/go-goyave/goyave/releases/tag/v5.0.0).
## Keuss 2.0
A serverless, persistent and highly-available queue middleware. That's what [Keuss](https://github.com/pepmartinez/keuss) is. It's built on Node.js, supports delays/schedule, and currently supports MongoDB, Redis and PostgreSQL. The latest release adds a new major backend, allowing Keuss to support queues over PostgreSQL databases. Check out the [changelog for all the updates](https://pepmartinez.github.io/keuss/docs/changelog/).
## Node-RED 4.0
Want a low-code tool for event-driven applications? Then [Node-RED](https://nodered.org/) is your go-to. The new update brings a breaking change, with Node-RED now requiring Node 18.x or later. The team has added new features and updated dependencies in the editor, along with lots of fixes. Check out the [release notes for all the details](https://github.com/node-red/node-red/releases/tag/4.0.0).
![Dashboard for Node-RED](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2qmnsbpfmhmolwrknwl5.png)
## DuckDB 1.0
What do ducks and databases have in common? They are both fast, reliable, and portable. Okay, maybe ducks aren't like that, but [DuckDB](http://duckdb.org/) is :duck:. This database provides a rich SQL dialect, with support for arbitrary and nested correlated subqueries, window functions, collations, complex types—such as arrays, structs, and maps—and several extensions. This [first major release](https://github.com/duckdb/duckdb) is called "Nivis", after the sadly non-existent Snow Duck, that is apparently known for its stability. Congrats to the team on shipping your very first version 🥳.
![DuckDB demo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hw5eopejv9c5ze8x3een.png)
## PouchDB 9.0
Speaking of databases, this one is pocket-sized. [PouchDB](https://pouchdb.com/) is a JavaScript database designed to run in the browser. This latest release includes over 202 merged PRs 😮, and comes with improved stability and performance. There's the ability to streamline the automated test suites and improve in-browser testing. [Read up on the major changes in the changelog](https://github.com/pouchdb/pouchdb/releases/tag/9.0.0).
![PouchDB](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rligq7u7bz4o5wo5jpyl.png)
## EasyExcel 4.0
This Java-based tool is built by the team at Alibaba and is used for handling Excel files. [EasyExcel](https://github.com/alibaba/easyexcel) can process them quickly, and can handle large file sizes. The latest release includes upgrades for poi, commons-csv, slf4j-api, and ehcache. There is now added support for jdk21. Check out all the changes in the [release notes](https://github.com/alibaba/easyexcel/releases/tag/v4.0.0).
## GitHub Roast
It's not an official release, but we felt this one deserved an honourable mention. [Haimantika](https://github.com/Haimantika) built this [super fun](https://github.com/Haimantika/GitHub-roast)—and frankly rather brutal—app that will take your GitHub username, repositories, and follower count and roast you :fire:. It's a fun way to interact with others, with lots of people sharing their [roasts on social media](https://x.com/search?q=github%20roast&src=typed_query). Built with Open AI and the GitHub API, [check it out online](https://github-roast.up.railway.app/home) and have yourself roasted.
{% twitter 1807670495208235290 %}
## Release Radar June
Well, that’s all for this edition. Thank you to everyone who submitted a project to be featured 🙏. We loved reading about the great things you're all working on. Whether your project was featured here or not, congratulations to everyone who shipped a new release 🎉, regardless of whether you shipped your project's first version, or you launched 9.0.
If you missed our last Release Radar, check out the amazing [open source projects that released major version projects in May](https://dev.to/github/release-radar-may-2024-edition-major-updates-from-the-open-source-community-4oj3). We love featuring projects submitted by the community. If you're working on an open source project and shipping a major version soon, we'd love to hear from you. Check out the Release Radar repository, and [submit your project to be featured in the GitHub Release Radar](https://github.com/github/release-radar/issues/new?assignees=MishManners&labels=&template=release-radar-request.yml&title=%5BRelease+Radar+Request%5D+%3Ctitle%3E). | mishmanners |
1,912,989 | Developer Productivity | When people talk to me about Internal Developer Platform, platform engineering, SRE, DevOps, ALM or... | 0 | 2024-07-05T15:56:16 | https://dev.to/ramonduraes/a-produtividade-do-desenvolvedor-2cm9 | desenvolvedor, produtividade, software, softwaredevelopment | When people talk to me about Internal Developer Platform, platform engineering, SRE, DevOps, ALM, or SDLC, I just repeat: “grab the code”…
In 2004, I began a deep dive into Developer Tools because, even back then, we faced enormous challenges in the developer's day-to-day work, affecting productivity, quality, maintainability, and above all, the continuous delivery of value.
Over the years, I have helped train many professionals in the market through thousands of talks, training sessions, and books, and by founding the Devprime platform with a 100% technology-driven product to help with developer productivity.
{% embed https://www.youtube.com/watch?v=tla5HXjd8Q0 %}
Let's keep building the future of technology together, with experience, consistency, and above all, straight from the trenches.
Do you need specialized software strategy consulting to support the modernization of your software? Get in touch. See you next time!
Ramon Durães
VP Engineering @ Driven Software Strategy Advisor
Devprime | ramonduraes |
1,912,988 | Automate User Management on Linux with a Bash Script | Managing users and groups on a Linux system can be a repetitive and error-prone task, especially in... | 0 | 2024-07-05T15:56:01 | https://dev.to/shirley_5e2405f86bcff245a/automate-user-management-on-linux-with-a-bash-script-1i60 | Managing users and groups on a Linux system can be a repetitive and error-prone task, especially in environments where users frequently join or leave the system. In this article, I'll walk you through creating a Bash script that automates user and group management, ensuring secure password handling and detailed logging.
This task is part of the HNG Internship, a fantastic program that helps interns gain real-world experience. You can learn more about the program at the [HNG Internship website](https://hng.tech/internship) or consider hiring some of their talented interns through the [HNG Hire page](https://hng.tech/hire).
The source code can be found on my [GitHub] (https://github.com/nthelma30/Create_User.sh.git)
#### Introduction
User management is a critical task for system administrators. Automating this process not only saves time but also reduces the risk of errors. This script will:
- Create users from an input file.
- Assign users to specified groups.
- Generate secure random passwords.
- Log all actions for auditing purposes.
#### Prerequisites
- A Linux system with Bash shell.
- `sudo` privileges to execute administrative commands.
- `openssl` for generating random passwords.
#### Script Breakdown
Here's the script in its entirety:
```bash
#!/bin/bash
# Check if the input file exists
if [ ! -f "$1" ]; then
echo "Error: Input file not found."
exit 1
fi
# Ensure log and secure directories are initialized once
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"
# Initialize log file
if [ ! -f "$LOG_FILE" ]; then
sudo touch "$LOG_FILE"
sudo chown root:root "$LOG_FILE"
fi
# Initialize password file
if [ ! -f "$PASSWORD_FILE" ]; then
sudo mkdir -p /var/secure
sudo touch "$PASSWORD_FILE"
sudo chown root:root "$PASSWORD_FILE"
sudo chmod 600 "$PASSWORD_FILE"
fi
# Redirect stdout and stderr to the log file
exec > >(sudo tee -a "$LOG_FILE") 2>&1
# Function to check if user exists
user_exists() {
id "$1" &>/dev/null
}
# Function to check if a group exists
group_exists() {
getent group "$1" > /dev/null 2>&1
}
# Function to check if a user is in a group
user_in_group() {
id -nG "$1" | grep -qw "$2"
}
# Read each line from the input file
while IFS=';' read -r username groups; do
# Trim whitespace
username=$(echo "$username" | tr -d '[:space:]')
groups=$(echo "$groups" | tr -d '[:space:]')
# Check if the user already exists
if user_exists "$username"; then
echo "User $username already exists."
else
# Create user
sudo useradd -m "$username"
# Generate random password
password=$(openssl rand -base64 12)
# Set password for user
echo "$username:$password" | sudo chpasswd
# Log actions
echo "User $username created. Password: $password"
# Store passwords securely
echo "$username,$password" | sudo tee -a "$PASSWORD_FILE"
fi
# Ensure the user's home directory and personal group exist
sudo mkdir -p "/home/$username"
sudo chown "$username:$username" "/home/$username"
# Split the groups string into an array
IFS=',' read -ra group_array <<< "$groups"
# Check each group
for group in "${group_array[@]}"; do
if group_exists "$group"; then
echo "Group $group exists."
else
echo "Group $group does not exist. Creating group $group."
sudo groupadd "$group"
fi
if user_in_group "$username" "$group"; then
echo "User $username is already in group $group."
else
echo "Adding user $username to group $group."
sudo usermod -aG "$group" "$username"
fi
done
done < "$1"
```
#### How It Works
1. **Input File Check**: The script starts by checking if the input file exists. If not, it exits with an error message.
2. **Log and Secure File Initialization**: It initializes the log and password files, ensuring they have the correct permissions.
3. **Function Definitions**: Functions to check user existence, group existence, and user membership in a group are defined.
4. **User and Group Processing**: The script reads the input file line by line, processes each username and group, creates users and groups as needed, and assigns users to groups.
5. **Password Handling**: Secure random passwords are generated and assigned to new users, and all actions are logged.
#### Running the Script
1. **Prepare the Input File**: Create a file named `input_file.txt` with the following format:
```
alice;developers,admins
bob;developers
charlie;admins,users
```
2. **Make the Script Executable**:
```sh
chmod +x user_management.sh
```
3. **Run the Script**:
```sh
sudo ./user_management.sh input_file.txt
```
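Once the run completes, you can spot-check the results (the username here comes from the sample `input_file.txt`):

```sh
# Confirm the user exists and was added to its groups
id alice

# Review the audit log and the password store created by the script
sudo tail /var/log/user_management.log
sudo cat /var/secure/user_passwords.csv
```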
#### Conclusion
This Bash script simplifies user management on Linux systems, ensuring users are created with secure passwords, assigned to appropriate groups, and all actions are logged for audit purposes. By automating these tasks, system administrators can save time and reduce errors.
Feel free to customize this script further to suit your specific needs. Happy automating!
#### About the Author
Shirley Gwata is a seasoned DevOps engineer with extensive experience in automating system administration tasks. Follow Shirley Gwata on [GitHub](https://github.com/shirley-007) for more insightful articles and projects.
---
These comprehensive README and article drafts should help you document and share your user management script effectively across different platforms. | shirley_5e2405f86bcff245a |
|
1,912,984 | How I Bought a Mitsubishi | San Francisco, the city of fog and hills, decided to give me a taste of an unexpected heatwave. I’m... | 0 | 2024-07-05T15:47:50 | https://dev.to/jessmar/how-i-bought-a-mitsubishi-1lp4 | San Francisco, the city of fog and hills, decided to give me a taste of an unexpected heatwave. I’m talking about temperatures that could make a polar bear rethink its life choices. My quaint little house, usually a cool refuge from the bustling city, turned into a sauna. Even the cat started giving me dirty looks, as if to say, "Do something, human."
It all began one sweltering Monday. I dragged myself home from work, peeling off my sweaty clothes like they were made of Velcro. I stood in front of my open fridge, fantasizing about moving into it. My home was so hot that I seriously considered setting up a tent on the Golden Gate Bridge just to catch some breeze.
The situation was unbearable. I avoided coming home. I started spending extra hours at work just to bask in the glory of their industrial-strength AC. My boss thought I had turned over a new leaf. Little did he know, it wasn’t dedication driving me—it was desperation.
One day, while sipping on an iced coffee in the office, my friend Jake, who has a knack for finding solutions to life's most absurd problems, popped by my desk. He noticed the beads of sweat forming on my forehead and said, "Man, you look like you’re melting."
“No kidding, Sherlock. My house has turned into an oven,” I replied, fanning myself with a file.
Jake, always the fixer, leaned in and whispered like he was sharing a top-secret government code, "You need to call [GALAXY Heating and Cooling](https://galaxyservices.com/). They’ll hook you up with a [Mitsubishi cooling system](https://www.mitsubishicomfort.com/)."
“A Mitsubishi? I thought they made cars,” I said, picturing a sedan parked in my living room with the AC cranked up.
“No, no,” Jake chuckled. “They make these amazing cooling systems. Trust me, it’ll change your life.”
I was skeptical, but desperate times call for desperate measures. That evening, I called GALAXY Heating and Cooling. Their representative, Sarah, sounded like she was sitting in a cool, breezy paradise as she assured me, “We’ll have you feeling like you’re living in an igloo in no time.”
Two days later, a couple of GALAXY techs arrived at my place, armed with tools and a Mitsubishi cooling system that looked like it was designed by NASA. They worked swiftly and efficiently, despite the heat. I provided them with an endless supply of iced tea, partially out of gratitude and partially out of the hope they wouldn’t abandon me in my hour of need.
By the time they were done, my house had transformed from a fiery pit of doom into a cool, refreshing oasis. The first blast of chilled air hit me, and I felt like I had ascended into an air-conditioned heaven. My cat, who had been glaring at me from her perch, immediately forgave me and settled down in front of the vent, purring contentedly.
That night, I slept like a baby. No more tossing and turning in a pool of my own sweat. I woke up feeling like a new person, rejuvenated and ready to face the world. I even contemplated inviting my boss over, just to show him my newfound commitment to comfort.
Weeks passed, and I started hosting dinner parties, inviting friends who were also suffering in the heat. My place became the go-to spot, the epitome of coolness in more ways than one. All thanks to that Mitsubishi system from GALAXY Heating and Cooling.
So, here I am, living the dream in San Francisco, all because I bought a Mitsubishi. Who knew that a cooling system could bring so much happiness and transform a home from a hellish furnace into a frosty paradise? If you’re ever roasting in your own house, take my advice: call GALAXY Heating and Cooling and get yourself a Mitsubishi. Your cat will thank you.
| jessmar |
|
1,912,982 | -Object -getOwnPropertyDescriptor, -defineProperty | getOwnPropertyDescriptor - Through this method we can inspect how an object's properties are controlled. For example... | 0 | 2024-07-05T15:47:00 | https://dev.to/husniddin6939/-objectgetownpropertydescriptor-objectdefineproperty-3fme | | getOwnPropertyDescriptor - Through this method we can inspect how an object's properties are controlled.
The returned descriptor tells us whether a property is Configurable, Enumerable, and Writable.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c4wjalz8yvtj9egznzlp.png)
```
let change={
name:'Arda',
theme:'Magic'
}
let result=Object.getOwnPropertyDescriptor(change, 'name');
console.log(result);
```
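For reference, a plain data property created in an object literal logs the default descriptor:

```
{ value: 'Arda', writable: true, enumerable: true, configurable: true }
```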
## Writable
When set to `true`, the property's value can be reassigned.
## Enumerable
This flag controls whether the property shows up during enumeration, for example in `for...in` loops or `Object.keys()`.
## Configurable
When set to `false`, the property can no longer be deleted and its descriptor can no longer be changed.
## DefineProperty
This method defines (or redefines) a property and lets us decide whether and how it can be changed.
```
let change={
  name:'Arda',
  theme:'Magic'
}

// Redefine 'theme' and lock it down
Object.defineProperty(change, 'theme', {
  value:"Netlify",     // new value
  enumerable:false,    // hidden from for...in and Object.keys
  writable:false,      // cannot be reassigned
  configurable:false   // cannot be deleted or redefined
});

change.smth='ok'; // adding other properties still works
console.log(change);

for(let key in change){
  // 'theme' is skipped because enumerable is false
  console.log('new', change[key]);
}
```
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mkb1kxkjmf8s0e5sqgfe.png)
In this example we change `theme` to "Netlify" and lock the property down: it no longer appears in the `for...in` loop, it can't be reassigned, and it can't be deleted. In short, whether and how a property may change is controlled entirely from JavaScript.
| husniddin6939 |
|
1,912,829 | Data Consistency and Integrity in API Integration | With such large volumes of data being exchanged between APIs, data consistency and integrity are... | 0 | 2024-07-05T15:46:52 | https://dev.to/apidna/data-consistency-and-integrity-in-api-integration-b76 | data, api, security, webdev | With such large volumes of data being exchanged between APIs, data consistency and integrity are essential to consider when integrating APIs.
Without these, the promise of seamless integration can quickly turn into a chaotic, error-prone mess.
Data consistency ensures that data remains uniform and coherent across all systems, while data integrity guarantees that the information is accurate, reliable, and trustworthy.
These concepts are critical to ensuring that integrated systems function correctly and efficiently.
This article highlights the vital aspects of data consistency and integrity within API integration. We will explore the fundamental principles, examine the dire consequences of compromised data, and highlight the differences between data validation and verification.
Additionally, we will provide best practices and strategies for handling JSON and XML data formats, discuss error detection and correction methods, and outline how robust integration architecture supports data integrity.
## Key Concepts: Data Integrity and Data Consistency
In API integration, ensuring data integrity means that the data exchanged between systems remains unaltered and trustworthy, preserving its original meaning and value.
This is critical in scenarios such as financial transactions, where incorrect data can lead to significant losses, or in healthcare, where it can result in life-threatening errors.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4e4eqmowl48r7reag5c1.jpg)
Data consistency ensures that all systems involved interpret and display data in the same way, preventing discrepancies that could disrupt operations.
For example, in a multi-platform e-commerce application, data consistency guarantees that inventory levels are accurately reflected across the website, mobile app, and backend systems simultaneously.
## The Impact of Data Integrity Failures
Compromised data integrity can have severe consequences for any business, leading to operational errors that disrupt business functions and impair decision-making.
When data is inaccurate or unreliable, it can cause significant issues such as incorrect inventory levels, erroneous financial reports, and flawed business strategies.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1ik9hqw347hp2yuo8luv.jpg)
Continuing from our previous examples, consider a financial application processing transaction data.
Without proper integrity checks, inaccurate data can lead to financial losses, erroneous tax calculations, and misinformed investment decisions.
Similarly, in healthcare systems, compromised patient records can result in incorrect diagnoses, inappropriate treatments, and potentially life-threatening mistakes.
## Why Both Validation and Verification Matter in API Integration
Data validation ensures that data conforms to predefined rules and structures before it is processed or stored.
This step is essential in API integration, as it prevents incorrect or malformed data from entering the system, thereby reducing errors and maintaining data quality.
Here are some of the key validation techniques:
- **Schema Enforcement:** Utilising schemas, such as JSON Schema or XML Schema Definition (XSD), to define the structure and format of data. Schemas ensure that incoming data matches expected patterns, preventing format-related errors. We’ll discuss these in more detail in the next section.
- **Data Type Checks:** Verifying that data types (e.g., integers, strings, dates) are correct and consistent. This avoids issues caused by incompatible or unexpected data types.
- **Business Logic Validation:** Implementing custom rules specific to the application’s requirements, ensuring data aligns with business logic. For example, validating that a “quantity” field is a positive integer or that a “date” field falls within a certain range.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/viqsp4xk2czx3cv755ev.jpg)
While data validation focuses on ensuring data meets certain criteria, data verification confirms the accuracy and correctness of the data after it has been collected.
This step is crucial for identifying and rectifying inconsistencies in data that might have passed initial validation but are still flawed.
Key verification techniques include:
- **Data Cleansing:** Identifying and correcting inaccuracies or incomplete data. Techniques such as outlier detection and normalisation help clean the data before further processing or storage.
- **Data Reconciliation:** Comparing data from different sources to identify and resolve discrepancies. Automated reconciliation processes highlight disparities, enabling timely corrections to ensure consistency.
- **Data Profiling:** Analysing data characteristics and identifying anomalies through profiling tools. This proactive approach helps detect potential issues early, providing a deeper understanding of the data and maintaining its integrity.
## Data Formats
As mentioned previously, JSON and XML are two prevalent schemas or formats in API communication.
Each of them require specific validation techniques to maintain data quality and consistency.
### JSON
JSON Schema defines the expected structure of JSON documents, specifying required fields, data types, and value constraints.
This ensures that any incoming data adheres to the predefined standards, preventing malformed data from entering the system.
In a financial API, a JSON Schema can enforce that a “transaction_amount” field must be a positive number and a “transaction_date” must follow a specific date format.
This pre-validation helps maintain data accuracy and integrity before any processing occurs.
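For illustration, a minimal JSON Schema expressing those two rules could look like the following sketch (field names follow the example above):

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "required": ["transaction_amount", "transaction_date"],
  "properties": {
    "transaction_amount": { "type": "number", "exclusiveMinimum": 0 },
    "transaction_date": { "type": "string", "format": "date" }
  }
}
```

Any payload with a negative amount (and, with format assertion enabled, a malformed date) is rejected before it ever reaches business logic.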
It is critical to ensure that all schemas are up-to-date and reflect the latest API specifications.
You can use automated tools, such as the APIDNA platform, to validate JSON data against schemas during integration.
Our autonomous agent powered platform takes care of mundane data formatting tasks, so you can focus on innovation.
[Click here](https://apidna.ai/) to try our platform today, and begin your journey to simplify API integrations.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w5lfikc7u6w4auwe7df5.jpg)
### XML
XSD provides a way to define and validate the structure and content of XML documents.
By specifying element types, attributes, and data constraints, XSD ensures that XML data is correctly formatted and adheres to expected patterns.
In a healthcare API, an XSD can ensure that a “patient_id” element is always an integer and that a “date_of_birth” element follows a standardised date format.
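A minimal sketch of such a definition might look like this (the wrapping `patient` element is an assumption for illustration):

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="patient">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="patient_id" type="xs:integer"/>
        <xs:element name="date_of_birth" type="xs:date"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```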
Make sure to regularly update XSDs to match evolving data requirements.
Employ robust XML parsers to validate data integrity against XSDs.
## Best Practices for Ensuring Data Consistency and Integrity
Implementing the following best practices ensures that data remains accurate, secure, and consistent across systems:
- **Schema Enforcement and Data Type Checks:** Enforcing schemas, such as JSON Schema and XML Schema Definition (XSD), is fundamental for validating the structure and format of data.
- **Business Logic Validation:** Beyond basic schema enforcement, it’s crucial to implement custom validation rules that reflect business logic. This ensures that data not only meets technical specifications but also aligns with specific business requirements.
- **Versioning and Tracking Data Changes:** Tracking changes through versioning and timestamps is vital for maintaining historical records and auditing purposes. Versioning allows you to manage different iterations of data structures, ensuring compatibility and traceability. Timestamps help track data modifications, facilitating error analysis and compliance checks.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bsgzbm5m6ckg6huyx10u.jpg)
- **Encryption and Securing Sensitive Data:** Protecting sensitive data is paramount. Encrypting data both at rest and in transit adds an essential layer of security. Employ strong encryption algorithms to safeguard confidential information, reducing the risk of unauthorised access and data breaches.
- **Error Handling and Logging:** Comprehensive error handling mechanisms are essential for quickly identifying and resolving data issues. Implement detailed logging to capture data errors and anomalies. These logs provide valuable insights into the root causes of discrepancies, enabling timely corrective actions and continuous improvement. To learn more about API error handling, check out our [previous article here](https://apidna.ai/api-error-handling-techniques-and-best-practices/).
- **Monitoring and Alerting for Continuous Data Quality Assurance:** Continuous monitoring of data quality, coupled with alerting systems, ensures proactive identification of potential issues. Setting up alerts for anomalies or deviations from expected patterns allows for swift intervention, minimising operational disruptions.
## Further Reading
[Data Integrity Best Practices & Architecture Strategies – KMS Technology](https://kms-technology.com/software-development/data-integrity-strategies.html)
[How to ensure Data Integrity and consistency in APIs – APItoolkit](https://apitoolkit.io/blog/how-to-ensure-data-integrity-and-consistency-in-apis/) | itsrorymurphy |
1,912,981 | How to Make Parent Div Activate Styling of Child Div for Hover and Active States | Hey there developers! 👋 Today, we're diving into a Tailwind CSS trick! We'll explore how to make a... | 0 | 2024-07-05T15:46:00 | https://devdojo.com/bobbyiliev/how-to-make-parent-div-activate-styling-of-child-div-for-hover-and-active-states | tailwindcss, css, webdev, beginners | Hey there developers! 👋
Today, we're diving into a Tailwind CSS trick! We'll explore how to make a parent div control the styling of its child elements on hover and active states. Let's jump right in!
## The Problem
You've probably encountered situations where you want an entire component to respond to user interactions, not just individual elements.
For example, you might want a card to change its appearance when hovered, including all its child elements. Tailwind CSS has an elegant solution for this: the `group` and `group-hover` utilities.
## The Solution: Group and Group-Hover
Tailwind CSS provides a powerful feature called "group hover" that allows us to style child elements based on the state of a parent element. Here's how it works:
1. Add the `group` class to your parent element.
2. Use `group-hover:` prefix on child elements to apply styles when the parent is hovered.
Let's see this in action with a cool example:
```html
<div class="group p-6 bg-gray-100 rounded-lg transition-all duration-300 hover:bg-blue-100">
<h2 class="text-2xl font-bold text-gray-800 group-hover:text-blue-600">Awesome Feature</h2>
<p class="mt-2 text-gray-600 group-hover:text-blue-500">This feature will blow your mind!</p>
<button class="mt-4 px-4 py-2 bg-blue-500 text-white rounded group-hover:bg-blue-600">
Learn More
</button>
</div>
```
In this example, when you hover over the parent `div`, the heading, paragraph, and button all change color simultaneously. Pretty cool, right?
## Taking It Further: Group-Active
But wait, there's more! We can also use the `group-active` variant to style child elements when the parent is in an active state (e.g., being clicked). Here's an enhanced version of our previous example:
```html
<div class="group p-6 bg-gray-100 rounded-lg transition-all duration-300 hover:bg-blue-100 active:bg-blue-200">
<h2 class="text-2xl font-bold text-gray-800 group-hover:text-blue-600 group-active:text-blue-800">Awesome Feature</h2>
<p class="mt-2 text-gray-600 group-hover:text-blue-500 group-active:text-blue-700">This feature will blow your mind!</p>
<button class="mt-4 px-4 py-2 bg-blue-500 text-white rounded group-hover:bg-blue-600 group-active:bg-blue-800">
Learn More
</button>
</div>
```
Now, when you click on the component, you'll see an additional style change. It's like magic, but it's just the power of Tailwind CSS! 🎩✨
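By the way, if you ever nest groups, Tailwind (v3.2+) also lets you name them so child utilities can target a specific ancestor. A quick sketch (the class names are illustrative):

```html
<div class="group/card p-6 rounded-lg bg-gray-100">
  <button class="group/action mt-4 px-4 py-2">
    <!-- Reacts to hovering the card and the button independently -->
    <span class="group-hover/card:text-blue-600 group-hover/action:underline">Learn More</span>
  </button>
</div>
```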
## Pro Tip: Extending Core Plugins
It's worth noting that not every property supports `group-hover` or `group-active` out of the box. In some cases, you might need to extend Tailwind's core plugins. You can do this in your `tailwind.config.js` file:
```javascript
const plugin = require('tailwindcss/plugin')

module.exports = {
  // ...other config
  plugins: [
    plugin(({ addVariant }) => {
      addVariant('group-hover', ':merge(.group):hover &')
      addVariant('group-active', ':merge(.group):active &')
    })
  ],
}
```
This will allow you to use `group-hover` and `group-active` with any utility in Tailwind CSS.
## Wrapping Up
And there you have it! You've just learned how to make parent divs control the styling of their child elements for hover and active states using Tailwind CSS.
As a next step, check out DevDojo's Tails page builder at [https://devdojo.com/tails/](https://devdojo.com/tails/). It's a fantastic visual builder that lets you create stunning Tailwind CSS powered pages with ease. Give it a spin and see how it can supercharge your development workflow!
Keep coding, stay curious, and until next time, may your builds be bug-free and your coffee strong! 💻☕️ | bobbyiliev |
1,912,980 | NEW: Multimodal Chatbot available on Eden AI | Elevate your conversational AI experience with our Multimodal Chat feature. Seamlessly integrate... | 0 | 2024-07-05T15:45:27 | https://www.edenai.co/post/new-chat-multimodal-available-on-eden-ai | ai, api | _Elevate your conversational AI experience with our Multimodal Chat feature. Seamlessly integrate advanced multimodal capabilities into your applications to enhance user interactions and provide a richer, more engaging experience._
## What is Multimodal AI?
Multimodal AI refers to artificial intelligence systems that can process and integrate information from multiple modalities or sources of data, such as text, images, audio, video, and sensor data. The goal of multimodal AI is to combine and leverage information from these different sources to improve understanding, decision-making, and task performance.
Some key aspects of multimodal AI include:
- Enhanced Understanding: Combining different types of data allows AI to form a richer, more complete understanding of the context. For example, a system that analyzes both video and audio can better understand the emotions and actions of people in a scene.
- Improved Performance: Multimodal AI often performs better on complex tasks than unimodal systems (those that process only one type of data). This is because it can leverage complementary information from different sources.
- Robustness: By relying on multiple data sources, multimodal AI systems can be more robust and less prone to errors. If one modality is noisy or missing, other modalities can help fill in the gaps.
- Natural Interaction: Multimodal AI enables more natural and intuitive human-computer interactions. For example, voice-activated assistants that also recognize gestures can interact more effectively with users.
## What is [Multimodal Chat](https://www.edenai.co/feature/multimodal-chat?referral=new-feature-chat-multimodal)?
The [Multimodal Chatbot](https://www.edenai.co/feature/multimodal-chat?referral=new-feature-chat-multimodal) allows developers to integrate multimodal functionality into their chat applications. Multimodal Chat supports various modes of communication, including text, voice, video, and images, enabling a more dynamic and interactive user experience. Multimodal AI models can take text, voice, images, video, and other forms of input, allowing for richer and more versatile user interactions.
![Multimodal Chat feature on Eden AI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8fk862zv3cqnrts0bil4.jpg)
Developers may opt for a unified Multimodal Chat API to simplify integration, reduce costs, and provide a cohesive solution for comprehensive multimodal communication. This approach offers advantages in terms of consistency, maintenance ease, and enhanced user experience compared to using separate APIs for text, voice, and image processing.
## What's the difference between Multimodal AI and Multimodal Generative AI?
Generative AI is a broad term that refers to the use of ML models to create content such as text, images, music, audio, and videos, usually from a single type of request. Multimodal AI builds on these generative capabilities by processing information in different forms, including images, videos, and text. Multimodality allows AI to process and understand different sensory modes. In practice, this means that users are not restricted to a single input, but are limited to a single type of output (text).
**_[Try these APIs on Eden AI](https://app.edenai.run/user/register?referral=new-feature-chat-multimodal)_**
## Benefits of using Multimodal Chat APIs
Multimodal Chat APIs have emerged as a powerful tool for developers. They offer a range of benefits that can significantly enhance the efficiency and effectiveness of conversational tasks. Here are several advantages of using a unified Multimodal Chat API:
### 1. Simplified Integration:
Adopting a unified Multimodal Chat API simplifies the development process by providing a centralized solution for integrating multimodal capabilities. Developers can leverage a consistent set of endpoints and methods, reducing the complexity of working with multiple APIs.
### 2. Cost Efficiency:
A combined Multimodal Chat API can potentially offer cost advantages over utilizing separate APIs for text, voice, and image processing. By consolidating these functionalities into a single solution, developers can optimize their resource allocation and reduce overall costs.
### 3. Reduced Latency:
Integrating a unified Multimodal Chat API can lead to improved performance by minimizing the need for multiple API calls. With a single interface handling various communication modes, applications can experience reduced latency and faster response times, resulting in a smoother user experience.
### 4. Ease of Maintenance:
Managing and maintaining a single Multimodal Chat API is generally more straightforward compared to handling multiple APIs. Updates, bug fixes, and improvements can be applied consistently across all communication modes, reducing the complexity of maintenance tasks and ensuring a cohesive user experience.
### 5. Holistic Analytics and Reporting:
A unified Multimodal Chat API facilitates comprehensive analytics and reporting by consolidating data from various communication modes into a single interface. This approach enables developers to gain valuable insights into user interactions, preferences, and behavior, allowing for data-driven decision-making and optimization.
### 6. Flexibility in Document Handling:
With a unified Multimodal Chat API, developers gain flexibility in handling diverse communication modes within their applications. This versatility allows for customization based on specific use cases, enabling developers to adapt to evolving user preferences and emerging communication trends without the need to switch between different APIs.
## Advantages of Eden AI's Multimodal Chat Feature
Eden AI's Multimodal Chat feature offers significant advantages over traditional chat functionalities:
### Enhanced User Engagement:
By integrating both text and image capabilities, Eden AI's Multimodal Chat feature allows for richer and more engaging user interactions. Users can seamlessly switch between text and image inputs, creating a more dynamic and interactive experience.
### Future-Ready Expansion:
While the current Multimodal Chat feature supports text and image inputs, Eden AI is committed to expanding its capabilities. Future updates will include additional modes such as voice and video, ensuring that your applications remain at the forefront of conversational AI technology.
### Improved User Experience:
The combination of text and image inputs in a single chat interface enhances the overall user experience. Users can convey their messages more effectively and intuitively, leading to higher satisfaction and better communication.
### Versatile Application:
The flexibility of the Multimodal Chat feature allows developers to customize their applications based on specific use cases. Whether it's customer support, virtual assistants, or interactive learning platforms, the multimodal capabilities can be tailored to meet diverse user needs.
### Scalability:
Eden AI's Multimodal Chat API is designed to scale with your application's growth. As your user base expands and their needs evolve, the API can handle increased demand and support additional features without compromising performance.
### Innovation Potential:
By leveraging the Multimodal Chat API, developers can explore innovative use cases and create unique applications that stand out in the market. The ability to combine text and image inputs opens up new possibilities for creative and impactful user experiences.
## Access Multimodal Chat providers with one API
Our standardized API allows you to use different providers on Eden AI to easily integrate Multimodal Chat APIs into your system.
### Anthropic - Available on Eden AI
#### Claude 3 Sonnet & Claude 3 Haiku:
These models are part of Anthropic's latest AI advancements, focusing on generating highly sophisticated and contextually rich text.
- Claude 3 Sonnet is designed for creative writing tasks, providing poetic and literary outputs.
- Claude 3 Haiku specializes in producing concise and impactful text, ideal for short-form content creation.
### Google Cloud - Available on Eden AI
#### Gemini Vision 1.5 Pro & 1.5 Flash
This model integrates advanced computer vision capabilities with natural language processing, enabling the interpretation and generation of descriptive text based on visual inputs.
Gemini Vision Pro is particularly effective in scenarios where understanding and describing images is critical, such as automated content creation, image captioning, and visual data analysis.
### OpenAI - Available on Eden AI
#### GPT-4 Turbo, GPT-4o, and GPT-4 Vision:
- GPT-4 Turbo: This variant is optimized for faster responses and more efficient processing while maintaining the high-quality output of GPT-4.
- GPT-4o: A specialized version of GPT-4, tailored for tasks requiring more extensive and detailed outputs, often used in complex data analysis and comprehensive content generation.
- GPT-4 Vision: A version of GPT-4 specifically designed for multimodal tasks, integrating advanced vision capabilities to handle both text and image inputs seamlessly.
**_[Try these APIs on Eden AI](https://app.edenai.run/user/register?referral=new-feature-chat-multimodal)_**
## What are the uses of Multimodal Chat APIs?
Multimodal Chat APIs have a wide range of applications across various sectors. They can be used to enhance user interactions, streamline workflows, and provide richer, more engaging experiences. Here are some common use cases:
### 1. Customer Support
Multimodal Chat APIs can be used to improve customer support systems by allowing users to send text and images. For example, customers can upload images of their issues, and the support system can provide more accurate and context-aware responses, leading to faster resolution times.
### 2. E-commerce
In e-commerce, these APIs can enhance the shopping experience by allowing users to upload images of products they are interested in. The system can then provide detailed information, similar product recommendations, or even generate visual search results, making it easier for customers to find what they are looking for.
### 3. Education and E-learning
Educational platforms can leverage Multimodal Chat APIs to create interactive learning experiences. Students can ask questions in text and upload images related to their queries, and the system can provide detailed explanations, visual aids, and additional resources, making learning more engaging and effective.
### 4. Healthcare
In the healthcare sector, Multimodal Chat APIs can assist in telemedicine by allowing patients to send images of their symptoms along with text descriptions. Healthcare providers can then analyze the images and provide more accurate diagnoses and treatment recommendations.
### 5. Market Research
Market researchers can use Multimodal Chat APIs to analyze visual data from social media, advertisements, and other sources. By uploading images and receiving detailed attribute tables and insights, researchers can better understand consumer behavior and develop more effective marketing strategies.
### 6. Creative Industries
In creative fields such as advertising and design, Multimodal Chat APIs can be used to generate and refine concepts. Users can upload images and receive AI-generated suggestions for improvements or new ideas, streamlining the creative process and fostering innovation.
### 7. Social Media Management
Social media platforms can utilize Multimodal Chat APIs to enhance user interactions by allowing users to post text and images together. This can improve content engagement and provide richer communication options, making social media experiences more dynamic and interactive.
## How to use Multimodal AI Chatbot?
To start using Multimodal Chat you need to [create an account on Eden AI for free](https://app.edenai.run/user/register?referral=new-feature-chat-multimodal). Then, you'll be able to get your API key directly from the homepage and use it with free credits offered by Eden AI.
![Eden AI App](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w1ozramygt3r92eeij8k.png)
**_[Get your API key for FREE](https://app.edenai.run/user/register?referral=new-feature-chat-multimodal)_**
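To give a rough idea of what a call looks like in code, here is a minimal Python sketch. The endpoint path, payload shape, and provider name below are illustrative assumptions; check the official Eden AI documentation for the exact contract:

```python
import requests

API_KEY = "your_api_key"  # copied from the Eden AI dashboard

# Assumption: the endpoint path and payload shape below are illustrative
response = requests.post(
    "https://api.edenai.run/v2/multimodal/chat",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "providers": "openai",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "content": {"text": "Describe this image."}},
                {"type": "media_url", "content": {"media_url": "https://example.com/photo.jpg"}},
            ],
        }],
    },
)
print(response.json())
```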
## Best Practices for Using Multimodal Chat on Eden AI
When implementing Multimodal Chat on Eden AI or any other platform, it's essential to follow certain best practices to ensure optimal performance, accuracy, and security. Here are some general best practices for Multimodal Chat on Eden AI:
**- Security and Compliance:** Ensure that any Multimodal Chatbot API usage complies with data protection regulations and security standards. Implement encryption and secure authentication mechanisms, and follow best practices for handling sensitive user information.
**- Data Accuracy and Validation:** Regularly validate and cross-verify the accuracy of the data processed through the Multimodal Chat API. Implement error-checking mechanisms to identify and rectify any discrepancies in the parsed information, whether it be text or image data.
**- Version Control:** Keep track of API versions and changes. This is important to ensure backward compatibility and to manage updates without disrupting existing integrations. Regularly review and update your implementations to take advantage of new features and improvements.
## How Eden AI can help you?
Eden AI is the future of AI usage in companies: our app allows you to call multiple AI APIs.
![Multiple AI Engines in one API Key](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vnux8egh9hduobu04waz.gif)
- Centralized and fully monitored billing on Eden AI for all Multimodal Chat APIs
- Unified API for all providers: simple and standard to use, quick switch between providers, access to the specific features of each provider
- Standardized response format: the JSON output format is the same for all suppliers thanks to Eden AI's standardization work. The response elements are also standardized thanks to Eden AI's powerful matching algorithms.
- The best Artificial Intelligence APIs in the market are available: big cloud providers (Google, AWS, Microsoft, and more specialized engines)
- Data protection: Eden AI will not store or use any data. Possibility to filter to use only GDPR engines.
**_[Create your Account on Eden AI](https://app.edenai.run/user/register?referral=new-feature-chat-multimodal)_** | edenai |
1,912,979 | How to bypass the LockDown Browser 2024 | Overview I have developed a software in 2024 to bypass lockdown browser which is used... | 0 | 2024-07-05T15:44:32 | https://dev.to/bypassy/how-to-bypass-the-lockdown-browser-2024-11p8 | lockdown, bypass | ## Overview
In 2024 I developed software to bypass LockDown Browser, which is widely used for online exams.
This tool is perfect for using other applications, **such as Chrome**, while running LockDown Browser and for those who require remote assistance through platforms like **Anydesk, RemotePC, TeamViewer and Microsoft Teams**.
## Key Features
* **Remote Assistance Possibility**: Use Anydesk, Microsoft Teams, Remote Desktop, and TeamViewer seamlessly with LockDown Browser.
* **Window Switching**: Easily switch between windows using Alt+Tab without any notification with LockDown Browser.
* **Custom Solutions**: Need a specialized bypass for a specific exam browser? I can develop custom software to meet your exact needs.
For more information or to request a custom solution, feel free to reach out to me.
Thank you!
## Contact
[@bypassy](https://t.me/bypassy)
discord: bypassy
live:.cid.258b728263fb7085
## Demo
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vnacdnpipuocufzgep5q.jpg)
| bypassy |
1,912,872 | Secure Access to Connected Devices | Hi folks, I am new to this forum. We are about to launch our first S-IMSY product and I wondered... | 0 | 2024-07-05T15:37:34 | https://dev.to/s-imsy/secure-access-to-connected-devices-cmj | discuss, security, api, cybersecurity | Hi folks,
I am new to this forum.
We are about to launch our first S-IMSY product and I wondered whether you would take a look at it and let me know whether you think it has any legs in regard to the projects you get involved with. If you do, we would encourage you to give it a go; please get in contact before you do.
**SecuriSIM**
Secure network access and management in one tool
No VPNs, No Firewalls and no unmanaged internet
SecuriSIM provides simple, low cost, remote access to your connected devices using our private mobile network, anywhere in the world.
- Control and configuration with portal & API management tools
- SSH & HTTP access to SecuriSIM devices directly on the network
- Built-in network intelligence reduces hardware cost and footprint, redefining business models
- Enable & disable connectivity & services instantly for advanced security
- Worldwide coverage with real-time SIM-level network steering, underpinned by 256-bit endpoint encryption
All that is required is a SecuriSIM, a USB LTE dongle, a wireless external ethernet or USB modem or a connected device with an internal m.2 LTE module with a SIM slot.
Go to our website and request a SIM with or without a data bundle. Once you receive the SIM, you need to register on our portal (linked from the main site), activate your SecuriSIM with the code we supply, and tick the box for the default configuration. There are FAQs on how to set it all up from scratch.
Use this forum to ask any questions.
Many thanks!
https://www.s-imsy.com
The SecuriSIM information will appear on the site by Monday but there is plenty of information to digest.
| s-imsy |
1,913,078 | 5 (More) Rust Project Ideas ~ For Beginners to Mid Devs 🦀👨💻 | Hey there, welcome back to my blog! 👋 If you're learning Rust and want to practice your skills I... | 0 | 2024-07-07T13:40:08 | https://eleftheriabatsou.hashnode.dev/5-more-rust-project-ideas-for-beginners-to-mid-devs-1 | rust | ---
title: 5 (More) Rust Project Ideas ~ For Beginners to Mid Devs 🦀👨💻
published: true
date: 2024-07-05 15:35:58 UTC
tags: Rust
canonical_url: https://eleftheriabatsou.hashnode.dev/5-more-rust-project-ideas-for-beginners-to-mid-devs-1
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/617zny3vtympthslauhs.jpeg
---
Hey there, welcome back to my blog! 👋
If you're learning Rust and want to practice your skills, I want to introduce you to 5 (more) practical projects that will help you with real-world work. I wrote a few more similar articles: one for [**complete beginners**](https://eleftheriabatsou.hashnode.dev/5-rust-project-ideas-for-absolutely-beginners-devs), one [**for beginners**](https://eleftheriabatsou.hashnode.dev/5-rust-project-ideas-for-beginner-devs) and one for [**beginners to mid-level**](https://eleftheriabatsou.hashnode.dev/5-rust-project-ideas-for-beginner-to-mid-devs). This article is also for beginner to mid Rust devs and the focus is on building games! 🎯
{% embed https://eleftheriabatsou.hashnode.dev/5-more-rust-project-ideas-for-beginners-to-mid-devs %}
Below you'll find: the 5 project ideas, the articles where I'm explaining step-by-step how you can build these projects, and a link to the corresponding GitHub repo!
## Project Idea 5: Random Number - Guessing Game
Have you ever played a random number guessing game? Well, now you can build it in Rust! The program randomly selects a number from a range \[e.g. 1 to 15\], the user picks a number, and the program prints whether the guessed number is too high or too low!
{% embed https://twitter.com/BatsouElef/status/1806680564038324566 %}
Read my tutorial [here](https://eleftheriabatsou.hashnode.dev/tutorial-random-number-guessing-game-in-rust):
{% embed https://eleftheriabatsou.hashnode.dev/tutorial-random-number-guessing-game-in-rust %}
Check it on [**GitHub**](https://github.com/EleftheriaBatsou/number-guess-game-rust).
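To give you a feel for the shape of the solution, the core loop might look roughly like this minimal sketch (assuming the `rand` crate is added to Cargo.toml; the tutorial's actual code may differ):

```rust
use rand::Rng;
use std::cmp::Ordering;
use std::io;

fn main() {
    // Pick a secret number in the range 1..=15
    let secret = rand::thread_rng().gen_range(1..=15);
    loop {
        println!("Guess a number between 1 and 15:");
        let mut guess = String::new();
        io::stdin().read_line(&mut guess).expect("failed to read line");
        // Ignore non-numeric input and ask again
        let guess: u32 = match guess.trim().parse() {
            Ok(n) => n,
            Err(_) => continue,
        };
        match guess.cmp(&secret) {
            Ordering::Less => println!("Too low!"),
            Ordering::Greater => println!("Too high!"),
            Ordering::Equal => {
                println!("You got it!");
                break;
            }
        }
    }
}
```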
## Project Idea 4: Build a Digital Clock
Have you ever tried to build a digital clock in Rust? It's a nice project idea and you'll learn many basic things (including a few things about Unicode characters)!
Read my tutorial [here](https://eleftheriabatsou.hashnode.dev/tutorial-build-a-digital-clock-in-rust):
{% embed https://eleftheriabatsou.hashnode.dev/tutorial-build-a-digital-clock-in-rust %}
Check it on [**GitHub**](https://github.com/EleftheriaBatsou/digital-clock-rust).
## Project Idea 3: 3D Cube and ASCII Animation
In this project, you can create a spinning 3D cube using ASCII, with as few dependencies as possible!
Read my tutorial [here](https://eleftheriabatsou.hashnode.dev/tutorial-simple-3d-cube-in-rust):
{% embed https://eleftheriabatsou.hashnode.dev/tutorial-simple-3d-cube-in-rust %}
Check it on [**GitHub**](https://github.com/EleftheriaBatsou/3d-cube-rust).
## Project Idea 2: Web Crawler with Surf and Async-Std
This is a practical example in Rust where you'll explore async-await. You can build a web crawler with `Surf` and `Async-Std`.
Read my tutorial [here](https://eleftheriabatsou.hashnode.dev/tutorial-web-crawler-with-surf-and-async-std):
{% embed https://eleftheriabatsou.hashnode.dev/tutorial-web-crawler-with-surf-and-async-std %}
Check it on [**GitHub**](https://github.com/EleftheriaBatsou/web-crawler-rust/tree/main).
## Project Idea 1: Real-time Chat App
{% embed https://twitter.com/BatsouElef/status/1796457586046521434 %}
One of the most popular server backend frameworks in Rust is Rocket, and one of the great things about Rocket is the documentation and examples repository, so I was inspired to create this project: a chat application with a modern clean UI.
Read my tutorial [here](https://eleftheriabatsou.hashnode.dev/tutorial-real-time-chat-app-in-rust-with-rocket):
{% embed https://eleftheriabatsou.hashnode.dev/tutorial-real-time-chat-app-in-rust-with-rocket %}
Check it on [**GitHub**](https://github.com/EleftheriaBatsou/chat-app-rocket-rust/tree/main).
---
## **Notes**
I'm new to Rust and I hope these small projects will help you get better and improve your skills. Check here [**part 1**](https://eleftheriabatsou.hashnode.dev/5-rust-project-ideas-for-absolutely-beginners-devs), [**part 2**](https://eleftheriabatsou.hashnode.dev/5-rust-project-ideas-for-beginner-devs), [**part 3**](https://eleftheriabatsou.hashnode.dev/5-rust-project-ideas-for-beginner-to-mid-devs) **and** [**part 4**](https://eleftheriabatsou.hashnode.dev/5-more-rust-project-ideas-for-beginners-to-mid-devs) of Rust project ideas and if you need more resources I'd also like to suggest [**Akhil Sharma**](https://www.youtube.com/@AkhilSharmaTech)'s and [**Tensor's Programming**](https://www.youtube.com/@TensorProgramming) YouTube Channels.
---
👋 Hello, I'm Eleftheria, **Community Manager,** developer, public speaker, and content creator.
🥰 If you liked this article, consider sharing it.
🔗 [**All links**](https://limey.io/batsouelef) | [**X**](https://twitter.com/BatsouElef) | [**LinkedIn**](https://www.linkedin.com/in/eleftheriabatsou/) | eleftheriabatsou |
1,912,977 | Automating IT Interviews with Ollama and Audio Capabilities in Python | In today’s tech-driven world, automation is revolutionizing recruitment. Imagine having a virtual IT... | 0 | 2024-07-05T15:33:37 | https://dev.to/josmel/automating-it-interviews-with-ollama-and-audio-capabilities-in-python-545o | ai, ollama, python |
In today’s tech-driven world, automation is revolutionizing recruitment. Imagine having a virtual IT interviewer that not only interacts intelligently but also communicates verbally with candidates. This post will guide you through building an IT interviewer using Ollama and Python, integrating audio capabilities for a more immersive experience.
**📚 Introduction**
Finding the right talent can be challenging and time-consuming. With advancements in AI and audio processing, it's possible to automate the initial interview phase. This project showcases how to create an interactive IT interviewer that asks questions and processes answers through voice, using Ollama and Google Cloud's Speech-to-Text and Text-to-Speech APIs.
**🚀 What You Will Learn**
- How to set up Ollama for conversation handling.
- Integrate Google Cloud’s Speech-to-Text and Text-to-Speech APIs for audio capabilities.
- Structure a Python project to automate interviews.
**🛠️ Prerequisites**
- Python 3.7+
- Google Cloud Account: For Speech-to-Text and Text-to-Speech APIs.
- Ollama Account: For conversational AI.
**📂 Project Setup**
**1. Clone the Repository**
Start by cloning the project repository:
```shell
git clone https://github.com/josmel/ollama-it-interviewer.git
cd ollama-it-interviewer
```
**2. Create and Activate a Virtual Environment**
Set up a virtual environment to manage dependencies:
```shell
python -m venv venv
source venv/bin/activate
```
**3. Install Dependencies**
Install the required Python packages:
```shell
pip install -r requirements.txt
```
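Judging from the imports in the source files shown later in this post, the requirements file presumably lists packages along these lines (exact names and version pins may differ):

```text
pydub
google-cloud-speech
google-cloud-texttospeech
python-dotenv
requests
```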
**4. Configure Google Cloud**
_a. Enable the APIs_
Enable the Speech-to-Text and Text-to-Speech APIs in your Google Cloud Console.
_b. Create Service Account and Download JSON Key_
1. Go to IAM & Admin > Service accounts.
2. Create a new service account, grant it the necessary roles, and download the JSON credentials file.
_c. Set the Environment Variable_
Set the environment variable to point to your credentials file:
```shell
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account-file.json"
```
Replace `/path/to/your/service-account-file.json` with the actual path to your credentials file.
**5. Prepare Audio Files**
Add sample audio files in the audio_samples/ directory. You need a candidate-response.mp3 file to simulate a candidate's response. You can record your voice or use text-to-speech tools to generate this file.
**6. Update Configuration**
Edit src/config.py to configure your Ollama credentials:
```python
OLLAMA_API_URL = 'https://api.ollama.com/v1/conversations' # Or replace with your Ollama local
OLLAMA_MODEL = 'your-ollama-model' # Replace with your Ollama model
```
**7. Run the Project**
Run the interviewer script:
```shell
# Option 1: Run as a module from the project root
python3 -m src.interviewer
```
or
```shell
# Option 2: Ensure PYTHONPATH is set and run directly
export PYTHONPATH=$(pwd)
python3 src/interviewer.py
```
**📝 Detailed Explanation**
`interviewer.py`
The main script orchestrates the interview process:
```python
from pydub import AudioSegment
from pydub.playback import play
from src.ollama_api import ask_question
from src.speech_to_text import recognize_speech
from src.text_to_speech import synthesize_speech
from dotenv import load_dotenv
import os

# Load environment variables
load_dotenv()

# Configure FFmpeg for macOS/Linux
os.environ["PATH"] += os.pathsep + '/usr/local/bin/'

def main():
    question = "Tell me about your experience with Python."
    synthesize_speech(question, "audio_samples/question.mp3")
    question_audio = AudioSegment.from_mp3("audio_samples/question.mp3")
    play(question_audio)

    candidate_response = recognize_speech("audio_samples/candidate-response.mp3")
    ollama_response = ask_question(candidate_response)
    print(f"Ollama Response: {ollama_response}")

    synthesize_speech(ollama_response, "audio_samples/response.mp3")
    response_audio = AudioSegment.from_mp3("audio_samples/response.mp3")
    play(response_audio)

if __name__ == "__main__":
    main()
```
`ollama_api.py`
Handles interaction with Ollama API:
```python
import requests
from src.config import OLLAMA_API_URL, OLLAMA_MODEL

def ask_question(question):
    response = requests.post(
        OLLAMA_API_URL,
        json={"model": OLLAMA_MODEL, "input": question}
    )
    response_data = response.json()
    return response_data["output"]
```
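Note that if you point OLLAMA_API_URL at a locally running Ollama instance instead of a hosted API, the local REST endpoint and field names differ from the ones above. A local, non-streaming call might look like this sketch (the model name is just an example):

```python
import requests

# A local Ollama server listens on port 11434 by default
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Tell me about your experience with Python.", "stream": False},
)
print(resp.json()["response"])
```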
`speech_to_text.py`
Converts audio to text using Google Cloud:
```python
from google.cloud import speech
import io

def recognize_speech(audio_file):
    client = speech.SpeechClient()
    with io.open(audio_file, "rb") as audio:
        content = audio.read()

    audio = speech.RecognitionAudio(content=content)
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.MP3,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        return result.alternatives[0].transcript
```
`text_to_speech.py`
Converts text to audio using Google Cloud:
```python
from google.cloud import texttospeech
import os

def synthesize_speech(text, output_file):
    # Verify that the environment variable is set
    assert 'GOOGLE_APPLICATION_CREDENTIALS' in os.environ, "GOOGLE_APPLICATION_CREDENTIALS not set"

    client = texttospeech.TextToSpeechClient()
    synthesis_input = texttospeech.SynthesisInput(text=text)
    voice = texttospeech.VoiceSelectionParams(
        language_code="en-US",
        ssml_gender=texttospeech.SsmlVoiceGender.NEUTRAL
    )
    audio_config = texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    )
    response = client.synthesize_speech(
        input=synthesis_input, voice=voice, audio_config=audio_config
    )
    with open(output_file, "wb") as out:
        out.write(response.audio_content)
        print(f"Audio content written to file {output_file}")
```
**🎉 Conclusion**
By integrating Ollama and Google Cloud’s audio capabilities, you can create a virtual IT interviewer that enhances the recruitment process by automating initial candidate interactions. This project demonstrates the power of combining conversational AI with audio processing in Python.
Give it a try and share your thoughts in the comments! If you encounter any issues or have suggestions, feel free to ask.
**📂 Project Structure**
```text
ollama-it-interviewer/
│
├── audio_samples/
│ ├── candidate-response.mp3
│
├── src/
│ ├── interviewer.py
│ ├── ollama_api.py
│ ├── speech_to_text.py
│ ├── text_to_speech.py
│ └── config.py
│
├── requirements.txt
├── README.md
└── .gitignore
```
**🛠️ Resources**
- Ollama
- Google Cloud Speech-to-Text
- Google Cloud Text-to-Speech
- Python pydub
**💬 Questions or Comments?**
Feel free to leave any questions or comments below. I’m here to help!
Repository : [https://github.com/josmel/ollama-it-interviewer](https://github.com/josmel/ollama-it-interviewer) | josmel |
1,912,976 | Wink Mod APK: Unlock the Full Potential of Your Video Editing | In the world of video editing, having access to powerful tools can make all the difference. Wink Mod... | 0 | 2024-07-05T15:31:13 | https://dev.to/andrew_tate01/wink-mod-apk-unlock-the-full-potential-of-your-video-editing-3ad7 | editing | In the world of video editing, having access to powerful tools can make all the difference. Wink Mod APK is an enhanced version of the popular Wink app, designed to provide users with a seamless editing experience. Whether you're a professional video editor or a casual creator, Wink Mod APK offers advanced features that help you bring your creative vision to life. In this blog, we'll explore the capabilities of Wink Mod APK, including its unique features, system requirements, and answers to frequently asked questions.
**What is Wink Mod APK?**
Wink Mod APK is a modified version of the Wink app, which provides additional features and functionalities that are not available in the standard version. This modded version eliminates ads, offers premium editing tools, and allows users to export videos without watermarks, giving a professional touch to your projects.
**Why Choose Wink Mod APK?**
For video editors seeking a comprehensive and efficient tool, Wink Mod APK stands out as an excellent choice. This version is tailored to enhance user experience by removing limitations found in the original app. By using Wink Mod APK, you can enjoy an uninterrupted editing process and leverage advanced features that elevate your creative output.
**Key Features of Wink Mod APK**
**1. Ad-Free Experience**
- Enjoy a seamless editing process without interruptions from ads.
- Focus entirely on your creativity without distractions.

**2. No Watermarks**
- Export videos without the Wink watermark.
- Ensure a professional look for your final product.

**3. Advanced Editing Tools**
- Access premium tools for precise editing.
- Utilize features like multi-layer editing, keyframe animation, and more.

**4. High-Quality Export**
- Export your projects in high resolution.
- Maintain the quality of your videos from editing to final output.

**5. User-Friendly Interface**
- Navigate the app with ease thanks to its intuitive design.
- Perfect for both beginners and experienced editors.
**How to Install Wink Mod APK**
1. [Download Wink Mod APK](https://winkpro.net/): Find a trusted source to download the Wink Mod APK file.
2. Enable Unknown Sources: Go to your device’s settings and enable installation from unknown sources.
3. Install the APK: Locate the downloaded file and install the app on your device.
4. Open the App: Launch Wink Mod APK and start editing your videos.
**Tips for Using Wink Mod APK**
1. **Explore All Features:** Take time to explore all the advanced tools and settings available in the modded version.
2. **Use Multi-Layer Editing:** Experiment with multi-layer editing to add depth and complexity to your videos.
3. **Leverage Keyframe Animation:** Use keyframe animation for smooth transitions and precise control over motion.
4. **Export in High Quality:** Always export your projects in the highest resolution to ensure the best quality.
5. **Stay Updated:** Keep an eye out for updates to the modded APK to access new features and improvements.
**Conclusion**
Wink Mod APK is a powerful tool that enhances the video editing experience by providing premium features without the usual restrictions. Whether you're editing for professional purposes or personal enjoyment, Wink Mod APK offers the tools you need to create high-quality, polished videos. By eliminating ads and watermarks, and offering advanced editing capabilities, this modded version is a valuable asset for any video editor. | andrew_tate01 |
1,912,974 | Mastering Unit Testing: A Comprehensive Guide | What In Unit test we test the functions, endpoints, components individually. In this we... | 0 | 2024-07-05T15:27:33 | https://dev.to/jay818/mastering-unit-testing-a-comprehensive-guide-ing | testing, vitest, cicd, typescript | ## What
In a `unit test` we test functions, endpoints, and components individually. We test only the functionality of the code that we have written and mock out all the external services (DB calls, Redis, etc.).
### How
We are going to use [Vitest](https://vitest.dev/) for the testing, as it has smoother TypeScript support than `Jest`. You can use either of them; the code will look the same for both.
**Some Testing Jargon**
- **`Test suite`** - A collection of the test cases of a particular module. We use the **`describe`** function; it helps to organise our test cases into groups.
```ts
describe("Testing Sum Module", () => {
// Multiple test cases for testing sum module
});
describe("Testing Multiplication Module", () => {
// Multiple test cases for testing Multiplication module
});
```
- **`Test case`** - An individual unit of testing, defined using `it` or `test`.
```ts
describe("Testing Sum Module", () => {
// Multiple test cases for testing sum module
it("should give 1,2 sum to be 3", () => {
    // Checks that the result matches the expected value
    expect(add(1, 2)).toBe(3);
});
});
```
- **`Mocking`** - Used to mock out external calls such as DB calls. To do this we use `vi.fn()`, which creates a mock function that returns `undefined` by default.
```ts
// db contains the code that we have to mock.
// it is good practice to keep the code that we have to mock in a separate file
// vi.mock() is hoisted on the top of the test file.
vi.mock("../db", () => {
return {
// prismaClient is imported from db file
// we want to mock the prismaClient.sum.create() function
    prismaClient: {
      sum: {
        create: vi.fn()
}
}
};
});
```
- **`Spy`** - Used to spy on a function call. Since we are mocking the DB call, we don't know whether the right arguments are being passed to it; to check that, we use a spy.
```ts
// create is the method
// prismaClient.sum is the object
vi.spyOn(prismaClient.sum, "create");
// Now, to check that the right arguments were passed to the create method, we can do this
expect(prismaClient.sum.create).toHaveBeenCalledWith({
data: {
a: 1,
b: 2,
result: 3,
},
});
```
- **`Mocking Return Values`** - Sometimes you want to use the values returned by an async operation or external call. Right now we can't use any of those values, because we are mocking the call. To do so, we have to use `mockResolvedValue`.
```ts
// This is not the actual prismaClient object. It is the mocked version of prismaClient, which is why we are able to use mockResolvedValue.
prismaClient.sum.create.mockResolvedValue({
id: 1,
a: 1,
b: 1,
result: 3,
});
```
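The same mocked method can also simulate failures, which is handy when you want to test error paths, e.g.:

```ts
// Make the mocked DB call reject, so we can assert how the code reacts to failures
prismaClient.sum.create.mockRejectedValue(new Error("DB down"));
```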
#### Examples
**Unit test of an Express APP**
In an Express app we put `app.listen` in a separate file, because otherwise every test run would start a real server, and we can't hard-code any PORT (what if the port is already in use?). Instead we use supertest, which automatically creates a throwaway, ephemeral server.
We create a separate bin.ts or main.ts file which calls `app.listen`.
- Run these Commands
```shell
npm init -y
npx tsc --init
npm install express @types/express zod
npm i -D vitest
# supertest allows us to create a throwaway test server
npm i supertest @types/supertest
# used for deep mocking; we don't have to specify which method to mock, we can mock the whole Prisma client
npm i -D vitest-mock-extended
```
Change rootDir and outDir in tsconfig.json:
```json
"rootDir": "./src",
"outDir": "./dist",
```
Add a test script in package.json:
```json
"test": "vitest"
```
Adding a DB
```shell
npm i prisma
npx prisma init
```
Add this basic schema in `schema.prisma`
```prisma
model Sum {
id Int @id @default(autoincrement())
a Int
b Int
result Int
}
```
Generate the client (notice we don’t need to migrate since we won’t actually need a DB):
```shell
npx prisma generate
```
Create src/db.ts which exports the prisma client. This is needed because we will be mocking this file out eventually
```ts
import { PrismaClient } from "@prisma/client";
export const prismaClient = new PrismaClient();
```
src/index.ts
```ts
import express from "express";
import { z } from "zod";
import { prismaClient } from "./db";
export const app = express();
app.use(express.json());
const sumInput = z.object({
a: z.number(),
b: z.number(),
});
app.post("/sum", async (req, res) => {
const parsedResponse = sumInput.safeParse(req.body);
if (!parsedResponse.success) {
return res.status(411).json({ message: "Invalid Input" });
}
// const a = req.body.a;
// const b = req.body.b;
const answer = parsedResponse.data.a + parsedResponse.data.b;
// we want to mock this as empty function
const response = await prismaClient.sum.create({
    // What guarantees that this data is passed exactly like this? A contributor could change it,
    // and right now, even if we passed the wrong input here, no error would be raised.
    // To solve this issue we use spies.
data: {
a: parsedResponse.data.a,
b: parsedResponse.data.b,
result: answer,
},
});
  console.log(response.id);
  // If we returned something else here (e.g. response.b), the test would catch the error.
  // res.json({ answer, id: response.b });
  res.json({ answer, id: response.id });
});
// For this endpoint, everything is passed via headers
app.get("/sum", (req, res) => {
const parsedResponse = sumInput.safeParse({
a: Number(req.headers["a"]),
b: Number(req.headers["b"]),
});
if (!parsedResponse.success) {
return res.status(411).json({ message: "Invalid Input" });
}
const answer = parsedResponse.data.a + parsedResponse.data.b;
res.json({ answer });
});
```
Create `__mocks__/db.ts` in the src folder, the same folder in which `db.ts` resides. This is a convention: Vitest looks for a `__mocks__` folder to know what to mock.
```ts
import { PrismaClient } from "@prisma/client";
import { mockDeep } from "vitest-mock-extended";
export const prismaClient = mockDeep<PrismaClient>();
```
**`index.test.ts`**
```ts
import { describe, it, expect, vi } from "vitest";
import request from "supertest";
import { app } from "../index";
import { prismaClient } from "../__mocks__/db";
// vi.mock("../db", () => ({
// prismaClient: { sum: { create: vi.fn() } },
// }));
vi.mock("../db");
// // Mocking the return value using mockResolvedValue
// prismaClient.sum.create.mockResolvedValue({
// Id: 1,
// a: 1,
// b: 2,
// result: 3,
// });
describe("POST /sum", () => {
it("Should return the sum of 2,3 to be 6", async () => {
// Mocking the return value using mockResolvedValue
prismaClient.sum.create.mockResolvedValue({
      id: 1,
a: 1,
b: 2,
result: 3,
});
vi.spyOn(prismaClient.sum, "create");
const res = await request(app).post("/sum").send({
a: 1,
b: 2,
});
expect(prismaClient.sum.create).toBeCalledWith({
data: {
a: 1,
b: 2,
result: 3,
},
});
expect(prismaClient.sum.create).toBeCalledTimes(1);
expect(res.status).toBe(200);
expect(res.body.answer).toBe(3);
expect(res.body.id).toBe(1);
});
it("Should return sum of 2 negative numbers", async () => {
// Mocking the return value using mockResolvedValue
prismaClient.sum.create.mockResolvedValue({
      id: 1,
a: 1,
b: 2,
result: 3,
});
vi.spyOn(prismaClient.sum, "create");
const res = await request(app).post("/sum").send({
a: -10,
b: -20,
});
expect(prismaClient.sum.create).toBeCalledWith({
data: {
a: -10,
b: -20,
result: -30,
},
});
expect(prismaClient.sum.create).toBeCalledTimes(1);
expect(res.status).toBe(200);
expect(res.body.answer).toBe(-30);
expect(res.body.id).toBe(1);
});
it("If wrong input is provided, it should return 411 with a msg", async () => {
const res = await request(app).post("/sum").send({
a: "abcd",
b: 2,
});
expect(res.status).toBe(411);
expect(res.body.message).toBe("Invalid Input");
});
});
describe("GET /sum", () => {
it("should return the sum of 2,3 to be 5", async () => {
const res = await request(app)
.get("/sum")
.set({
a: "2",
b: "3",
})
.send();
expect(res.status).toBe(200);
expect(res.body.answer).toBe(5);
});
});
```
Now run `npm run test` to test your code.
### Implementing CI Pipeline
Create a `.github/workflows/test.yml` file.
The workflow below automatically runs the tests whenever code is pushed to the main branch or a pull request targets it.
```yml
name: Testing on CI
on:
pull_request:
branches:
- main
push:
branches:
- main
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Set up Node.js
uses: actions/setup-node@v2
with:
node-version: 20
- name: Install dependencies
run: npm install && npx prisma generate
- name: Run tests
run: npm test
```
| jay818 |
1,912,342 | Creating a React component using Symfony UX | Introduction I have been using Angular to build my front-ends for much time and I wanted... | 0 | 2024-07-05T15:26:26 | https://dev.to/icolomina/creating-a-react-component-using-symfony-ux-1bk9 | react, symfony, ux, php | ## Introduction
I have been using Angular to build my front-ends for a long time and I wanted to try another framework. I read a lot of posts and articles about [React](https://es.react.dev/), so I decided to start learning it. As I am a [Symfony](https://symfony.com/) lover, I decided to begin my learning by integrating a React component using the [symfony/ux-react](https://symfony.com/bundles/ux-react/current/index.html) component.
In this post, I will explain the steps I've followed to achieve it.
## Install Webpack Encore
[Webpack](https://webpack.js.org/) is a bundler for JavaScript applications. It takes in multiple entry points and bundles them into optimized output files, along with their dependencies, for efficient delivery to the browser.
Symfony provides a component to easily integrate webpack within your application. You can install it using composer and then npm to install the javascript required libraries:
```shell
composer require symfony/webpack-encore-bundle
npm install
```
> This post assumes you are using [symfony-flex](https://symfony.com/doc/current/quick_tour/flex_recipes.html)
After [installing webpack-encore-bundle](https://symfony.com/doc/current/frontend/encore/installation.html) you will see an **assets** directory under your project-root folder containing the following files:
- **app.js**: The file which manages all your front-end dependencies; in this post's case, React components. It also imports CSS files.
- **styles/app.css**: A file where you can put your CSS. You can use other CSS files; to use them in your project, import them in the **app.js** file.
The **app.js** file contains the following content after being created:
```javascript
import './styles/app.css';
```
Encore flex recipe also creates a file on your project-root folder named **webpack.config.js**. This file contains all the webpack configuration to bundle your assets.
At this point, the most important configurations are the following:
- **The Output Path**: Specifies the directory where compiled assets will be stored.
- **The Public Path**: Specifies the public path used by the web server to access the output path.
- **Entry**: Specifies the main entry file (app.js).
When the flex recipe creates the **webpack.config.js** file, it sets the previous values as follows:
```javascript
Encore
.setOutputPath('public/build/')
.setPublicPath('/build')
.addEntry('app', './assets/app.js')
;
```
Unless we need some special configuration, we can leave these values as they are.
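To actually compile the assets, you can use the scripts the Encore recipe adds to package.json (the exact script names may vary slightly between recipe versions):

```shell
# compile assets once for development
npm run dev
# or re-compile automatically whenever a file changes
npm run watch
```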
## Installing The Stimulus Bundle
The [stimulus-bundle](https://symfony.com/bundles/StimulusBundle/current/index.html) is the component in charge of activating the other symfony ux-components you want to use in your application (in our case, symfony/ux-react).
The stimulus-bundle must be installed using composer:
```shell
composer require symfony/stimulus-bundle
npm install
```
As the installation uses Symfony Flex, after it, we will see two new files under the **assets** directory:
- **bootstrap.js**: This file starts the stimulus app so that other symfony ux-components can be installed.
```javascript
import { startStimulusApp } from '@symfony/stimulus-bridge';
const app = startStimulusApp();
```
The above code snippet shows the **bootstrap.js** file contents. It simply starts the stimulus app. We must import this file in the **app.js** file:
```javascript
import './bootstrap.js';
import './styles/app.css';
```
- **controllers.json**: Contains the ux-components which must be activated within the application.
```json
{
"controllers": [],
"entrypoints": []
}
```
The above **controllers.json** is empty because we have not installed the ux-react component yet. After installing it, we will come back to this file to analyze its content.
This recipe also adds the following line in the **webpack.config.js** file:
```javascript
Encore
// ........
.enableStimulusBridge('./assets/controllers.json')
// ........
;
```
This line enables the stimulus bridge specifying that the **controllers.json** file will contain all the ux-components to activate.
## Enabling TypeScript
To enable TypeScript, we must follow these steps:
### Enable TypeScript in webpack.config.js
You will find the following commented line in the **webpack.config.js** file:
```javascript
Encore
// ..........
//.enableTypeScriptLoader()
// ..........
;
```
We have to uncomment the above line.
### Rename app.js to app.ts
As we are going to use TypeScript, we must rename app.js to use the TypeScript extension (app.ts).
Then, we have to return to the **webpack.config.js** file and change this line:
```javascript
.addEntry('app', './assets/app.js')
```
To this:
```javascript
.addEntry('app', './assets/app.ts')
```
### Create the tsconfig.json file
The **tsconfig.json** file is a configuration file used by the TypeScript compiler to determine how to compile TypeScript code into JavaScript. It contains various settings and options that control the behavior of the TypeScript compiler, such as the target JavaScript version, module resolution, and source maps. Let's see what this file looks like:
```json
{
    "compileOnSave": true,
    "compilerOptions": {
        "sourceMap": true,
        "moduleResolution": "node",
        "lib": ["dom", "es2015", "es2016"],
        "jsx": "react-jsx",
        "target": "es6"
    },
    "include": ["assets/**/*"]
}
```
> If you want to know more about the tsconfig configuration parameters, you can read the [docs](https://www.typescriptlang.org/docs/handbook/tsconfig-json.html).
The two important parameters we have to pay attention to are the following:
- **jsx**: Specifies how JSX is compiled; `react-jsx` uses the modern React JSX transform.
- **include**: Specifies that all the TypeScript files under the assets folder will be compiled.
## Installing the UX-react component
Having Encore and the stimulus-bundle installed and TypeScript enabled, we are ready to install the Symfony ux-react component. As always, we must use Composer to install it:
```shell
composer require symfony/ux-react
npm install -D @babel/preset-react --force
```
As this component also uses Symfony Flex, after being installed it adds the following line to the **webpack.config.js**:
```javascript
Encore
    // .......
    .enableReactPreset()
    // .......
;
```
The above line enables react in the webpack config.
This recipe also adds the following code to your app.ts file:
```typescript
import './bootstrap.js';
import { registerReactControllerComponents } from '@symfony/ux-react';
registerReactControllerComponents(require.context('./react/controllers', true, /\.(j|t)sx?$/));
import './styles/app.css';
```
The two lines after the **bootstrap.js** import enable automatic registration of all React components located in the **assets/react/controllers** folder. Both jsx and tsx (TypeScript) extensions are supported.
If we look now in the controllers.json file, we will see the following content:
```json
{
    "controllers": {
        "@symfony/ux-react": {
            "react": {
                "enabled": true,
                "fetch": "eager"
            }
        }
    },
    "entrypoints": []
}
```
As you can see, the controllers key has a new entry. This entry enables React components and specifies an eager fetch, which means all React components are loaded upfront. If you set the value to "lazy", a React component is loaded only when it is first required. For this article, an eager fetch fits well.
## Creating the React Component
Now it's time to create the React component. We are going to create a component that contains a form with an input named "amount" and a button that calls a function which makes an API call.
### Creating the Api Service
Before showing the component's code, let's create a class containing the method that sends the request. This file must be located under the **assets/react/services** folder and must have the **.ts** extension.
```typescript
export class ApiService {

    sendDeposit(amount: number): Promise<any> {
        return fetch('/<your_call_url>', {
            method: "POST",
            mode: "same-origin",
            headers: {
                "Content-Type": "application/json",
            },
            body: JSON.stringify({
                'amount': amount
            })
        });
    }
}
```
The **ApiService** class uses the global **fetch** function from the [JavaScript Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) to send the request to the server.
### Creating the component
This component must be located under the **assets/react/controllers** folder and have a **tsx** extension.
```typescript
import { useState, useEffect } from 'react';
import { ApiService } from '../services/api';

interface FormData {
    amount: number;
}

export default function DepositForm () {

    const [formData, setFormData] = useState<FormData>({ amount: 0 });
    const [depositSuccess, setDepositSuccess] = useState<boolean>(false);
    const apiService: ApiService = new ApiService();

    useEffect(() => {
        if (depositSuccess) {
            console.log('Form submitted successfully!');
        }
    }, [depositSuccess]);

    const handleChange = (event: any) => {
        const { name, value } = event.target;
        setFormData((previousFormData) => ({ ...previousFormData, [name]: value }));
    }

    const handleForm = (event: any) => {
        apiService.sendDeposit(formData.amount).then(
            (r: any) => {
                if (!r.ok) {
                    throw new Error(`HTTP error! Status: ${r.status}`);
                }
                setDepositSuccess(true);
            }
        );
    }

    return (
        <div>
            <div className="row mb-3">
                <div className="col-md-12">
                    <div className="form-floating mb-3 mb-md-0">
                        <input type="text" name="amount" id="amount" className="form-control" value={formData.amount} onChange={handleChange} />
                        <label htmlFor="amount" className="form-label">Amount</label>
                    </div>
                </div>
            </div>
            <div className="row mb-3">
                <div className="col-md-12">
                    <button type="button" className="btn btn-primary" onClick={handleForm}>Send deposit</button>
                </div>
            </div>
        </div>
    );
}
```
Let's analyze the component step-by-step:
```typescript
import { useState, useEffect } from 'react';
import { ApiService } from '../services/api';

interface FormData {
    amount: number;
}
```
- It imports the React [useState](https://react.dev/reference/react/useState) and [useEffect](https://react.dev/reference/react/useEffect) hooks
- It imports the previously created **ApiService** class
- It creates an interface to represent the form fields.
```typescript
const [formData, setFormData] = useState<FormData>({ amount: 0 });
const [depositSuccess, setDepositSuccess] = useState<boolean>(false);
const apiService: ApiService = new ApiService();

useEffect(() => {
    if (depositSuccess) {
        console.log('Form submitted successfully!');
    }
}, [depositSuccess]);
```
- It uses the useState hook to initialize the formData amount value and the depositSuccess value.
- It creates an ApiService instance.
- It uses the useEffect hook to show a console message when depositSuccess becomes true.
```typescript
const handleChange = (event: any) => {
    const { name, value } = event.target;
    setFormData((previousFormData) => ({ ...previousFormData, [name]: value }));
}

const handleForm = (event: any) => {
    apiService.sendDeposit(formData.amount).then(
        (r: any) => {
            if (!r.ok) {
                throw new Error(`HTTP error! Status: ${r.status}`);
            }
            setDepositSuccess(true);
        }
    );
}
```
- The **handleChange** function is used to update formData when the form amount value changes.
- The **handleForm** function sends the request using the **ApiService** sendDeposit function.
```typescript
return (
    <div>
        <div className="row mb-3">
            <div className="col-md-12">
                <div className="form-floating mb-3 mb-md-0">
                    <input type="text" name="amount" id="amount" className="form-control" value={formData.amount} onChange={handleChange} />
                    <label htmlFor="amount" className="form-label">Amount</label>
                </div>
            </div>
        </div>
        <div className="row mb-3">
            <div className="col-md-12">
                <button type="button" className="btn btn-primary" onClick={handleForm}>Send deposit</button>
            </div>
        </div>
    </div>
);
```
The component's HTML contains an input and a button. The input's value property holds the **formData.amount** value, and the **onChange** event executes the **handleChange** function. Since **handleChange** updates the form data with the new values, the amount field is updated after every change.
The button executes the **handleForm** function when clicked.
## Calling the React component from Twig
Calling the React component from Twig is as easy as using the Twig **react_component** function.
```twig
{% extends 'base.html.twig' %}

{% block title %}Make a deposit{% endblock %}

{% block body %}
    <div class="container-fluid px-4">
        <h1 class="mt-4">Deposits</h1>
        <ol class="breadcrumb mb-4">
            <li class="breadcrumb-item active">Send deposit and start generating interests</li>
        </ol>
        <div class="row">
            <div class="col-xl-6">
                <div class="card mb-4">
                    <div class="card-header">
                        <i class="fas fa-chart-area me-1"></i>
                        Send deposit form
                    </div>
                    <div class="card-body">
                        <div {{ react_component('DepositForm') }}></div>
                    </div>
                </div>
            </div>
        </div>
    </div>
{% endblock %}
```
**Important**: You have to include your Webpack-bundled assets in your base.html.twig file (or the corresponding file in your project) so that the Stimulus application is initialized and the React components are loaded. This can be done within the HTML head tag.
```twig
<head>
    <meta charset="utf-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" />
    <meta name="description" content="" />
    <meta name="author" content="" />
    <title>{% block title %}Welcome!{% endblock %}</title>
    {% block stylesheets %}
        <!-- Your other stylesheets (if there are) -->
        {{ encore_entry_link_tags('app') }}
    {% endblock %}
    {% block javascripts %}
        <!-- Your other javascripts (if there are) -->
        {{ encore_entry_script_tags('app') }}
    {% endblock %}
</head>
```
The **encore_entry_link_tags** and **encore_entry_script_tags** functions include the bundled CSS and scripts, respectively.
## Conclusion
This article shows how to prepare your Symfony project to support React, how to create a React component, and how to use it within your project via the **react_component** Twig function.
Although backend and frontend applications are usually separated and communicate via APIs, this approach can be useful in situations where the frontend and backend coexist in the same project.
> If you enjoy my content and like the Symfony framework, consider reading my book: [Building an Operation-Oriented Api using PHP and the Symfony Framework: A step-by-step guide](https://amzn.eu/d/3eO1DDi) | icolomina |
1,912,962 | I'm russian developer. Why everybody hate me? | Why everybody hate me? Why everybody hate me? Why everybody hate me? Why everybody hate me? Why... | 0 | 2024-07-05T15:22:00 | https://dev.to/__713d0caf/im-russian-developer-why-everybody-hate-me-1jdn | webdev, softskill | Why everybody hate me? Why everybody hate me? Why everybody hate me? Why everybody hate me? Why everybody hate me? Why everybody hate me? Why everybody hate me? Why everybody hate me? Why everybody hate me? Why everybody hate me? Why everybody hate me? Why everybody hate me? | __713d0caf |
1,912,556 | Mythbusting DOM: Is DOM the same as HTML? | One of the misconceptions circulating among the young generation of web developers, and not only... | 0 | 2024-07-05T15:21:03 | https://dev.to/babichweb/mythbusting-dom-is-dom-the-same-as-html-561f | One of the misconceptions circulating among the young generation of web developers, and not only them, is the belief that the DOM is actually the same HTML, just in the browser. This misconception is further fueled by the fact that the browser's DOM inspector displays everything within the webpage as the good old hypertext markup, adding more confusion to the understanding of these things.
So, as I loudly declare today that "DOM is not HTML," the question arises: "What is it, then?" Let's try to figure it out together.
First, let's look at the dry academic definition. DOM (Document Object Model) is a programming interface that allows programs to dynamically access and update a document's content, structure, and styles. The DOM represents the document as a data structure, such as a tree, where each node is an object representing a part of the document, such as an element, attribute, or text content. Does this make a bit more sense? If you answered "yes," then I have every reason to suspect you have a bit of a pedant in you who loved "Legends and Myths of Ancient Greece" as a favourite book in kindergarten. Wait, that's about me... Oh, never mind, let's try to explain it more simply.
Let's imagine that HTML tags are images of LEGO blocks, and HTML attributes in this example are different characteristics of these images: colour, orientation in space, etc. Then our HTML documents are those instruction booklets from which you can assemble what's illustrated, like the Millennium Falcon, and then, if desired and inspired, a deep-sea pterodactyl with a pedal drive and a nuclear warhead in its butt.
And the DOM is what you've assembled from the real blocks, which you can touch with your hands. As in a construction set, each part (a DOM node) is a piece of a complex hierarchy, potentially acting as both a parent and a child at the same time. You can imagine this hierarchy in various ways, so for simplification, let's imagine that our LEGO is at least four-dimensional and can simultaneously have both flat and spatial relationships.
Just as you can find the necessary part in the already assembled constructor (well, when you were assembling and realized three packets ago that you missed inserting the green block), you can also find the required element in the DOM. Moreover, you can find many different elements simultaneously based on various attributes, usually id, tag name, class, attribute value, and others.
Moreover, you can not only find these elements but also perform various useful actions with them! For instance, you can replace some blocks with others, add new ones, remove unnecessary ones, and even change the properties of these blocks. In a real constructor, this wouldn't work, but if it did, then you could. After all, we have an imaginary, four-dimensional LEGO, so let it be.
And just like a real LEGO constructor, the DOM can be genuinely interactive, i.e., it can respond to user actions. Just like in those expensive sets where you can assemble a working Lamborghini model at a 1:24 scale, with a beeping horn, a buzzing engine, opening-closing doors, and imaginary cocaine spilt on the passenger seat. I'm talking about DOM events here, not cocaine, but the fact that the DOM can handle events. For example, you press a hypothetical door, and it opens. So it is in the DOM — you press a button, an event is triggered, and you can subscribe to it, opening hypothetical doors. There are many events in the DOM, and all of them are very interesting, and now you can even create your custom events, not limited to standard ones.
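To make the analogy concrete, here's a tiny sketch in TypeScript of finding blocks, changing them, and reacting to events (the ids and classes are made up for illustration):
```typescript
// Find one block by id, and many blocks by class.
const door = document.querySelector<HTMLButtonElement>('#door');
const wheels = document.querySelectorAll<HTMLElement>('.wheel');

if (door) {
  // Change a block's properties...
  door.textContent = 'Open the door';
  // ...and handle user actions through events.
  door.addEventListener('click', () => {
    wheels.forEach((wheel) => wheel.classList.toggle('spinning'));
  });
}
```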
So what am I saying? The DOM is not HTML but a completely separate thing that has a whole range of properties unique to it, and thanks to them, we can create attractive, interesting, and diverse web applications. | babichweb |
|
1,912,882 | Securing Generative AI Applications: A Comprehensive Approach | Generative AI technology has enabled the enhancement of effective tools to create content material... | 0 | 2024-07-05T15:19:55 | https://dev.to/calsoftinc/securing-generative-ai-applications-a-comprehensive-approach-1997 | ai, machinelearning, security, productivity | Generative AI technology has enabled powerful tools that create content resembling human creativity, including text and images. However, this great capability comes with significant concerns. Ensuring the safety of these applications is essential to prevent misuse and protect sensitive data. This blog explores the key techniques for securing generative AI applications, offering a detailed, step-by-step guide.
### Understanding the Risks of Generative AI Services
Generative AI services, while innovative, raise several security concerns. These risks include data breaches, unauthorized access, and the misuse of generated content. Understanding generative AI's abilities and weaknesses is the first step in building a secure generative AI environment.
**Data Breaches:** If sensitive data used to train AI models is not properly secured, it can be exposed, leading to potential data breaches.
**Unauthorized Access:** Hackers can exploit vulnerabilities to gain unauthorized access to AI systems.
**Content Misuse:** Generated content can be manipulated for malicious purposes, such as creating deepfakes, posing serious risks of misuse.
## Building a secure foundation
To reduce these risks, it is critical to build a secure foundation for your [**generative AI services**](https://www.calsoft.ai/gen-ai/). This involves implementing robust security measures right from the development phase.
**Data Encryption:** When you encrypt data, you make it unreadable to unauthorized parties, even if they manage to intercept it. This provides an additional layer of protection.
**Access Controls:** It's critical to have strict access controls to make certain that only authorized personnel can access the AI systems and data. This helps reduce the risk of unauthorized access.
**Regular Audits:** Conducting security audits on a regular basis helps discover and address vulnerabilities quickly, ensuring the overall security of AI systems and data.
## Ensuring data privacy and integrity
Ensuring data privacy and integrity is vital for protecting generative AI services. This includes preventing unauthorized access to the data and ensuring that the data used for training and content creation is accurate and reliable.
**Data Anonymization:** Masking or anonymizing data helps protect the privacy of individuals whose data is used to train AI models.
**Data Validation:** It is crucial to routinely validate data to ensure its accuracy and integrity and to verify that it has not been altered.
**Secure Storage:** Storing data in secure environments is essential to safeguard it from unauthorized access and potential breaches.
## Implementing robust authentication mechanisms
To avoid unwanted access to generative AI services, authentication techniques are essential. Robust authentication mechanisms help verify users' identities and ensure that only authorized individuals have access to sensitive data and services.
**Multi-Factor Authentication (MFA):** MFA enhances security by requiring multiple forms of verification.
**Biometric Authentication:** Using fingerprint or facial recognition adds a further level of security.
**Password Policies:** Implement strict password rules and regular password changes to discourage unauthorized access and strengthen security.
## Monitoring and incident response
Continuous monitoring and having an incident response plan in place are essential for maintaining the security of generative AI services. This helps identify and address security threats in real time.
**Continuous Monitoring:** Set up monitoring tools to keep a constant watch on system activities and detect any unusual behaviour.
**Incident Response Plan:** It's crucial to have a plan in place to deal with any security breaches or incidents quickly and effectively, and it needs to be regularly reviewed and updated.
**Regular Updates:** Make sure to regularly update all software and security features to protect against new and evolving threats.
## Securing the Development Lifecycle
Securing the whole development lifecycle of generative AI services is critical to preventing vulnerabilities from emerging at any point. This includes implementing secure coding practices and performing extensive testing.
**Secure Coding Practices:** Educate developers on secure coding practices to limit the introduction of vulnerabilities during the development process.
**Thorough Testing:** Perform extensive testing, including security testing, to discover and address vulnerabilities prior to deployment.
**Version Control:** Employ version control systems to monitor changes and uphold the integrity of the code.
## Ethical Considerations and Responsible Use
Beyond technical measures, ethical considerations and the responsible use of generative AI services are essential to ensuring security. This entails creating guidelines for the ethical application of AI and guarding against abuse.
**Ethical Guidelines:** Develop and enforce clear guidelines for the ethical use of AI to prevent misuse.
**Transparency:** Uphold transparency in AI operations to build trust and ensure accountability.
## Future Trends and Innovations in AI Security
Staying up to date with the evolving landscape of AI security is vital for maintaining robust security measures. Key components include:
**AI-Driven Security Tools:** Utilizing AI to create advanced security tools capable of anticipating and averting security threats.
**Blockchain for Security:** Integrating blockchain technology to boost the security and transparency of AI systems.
**Advanced Encryption Techniques:** Developing and utilizing sophisticated encryption strategies to safeguard sensitive data.
## Conclusion
Securing generative AI systems requires a comprehensive approach that includes technical safeguards, ethical considerations, and ongoing monitoring. Calsoft excels at providing robust generative AI and [**data security services**](https://www.calsoftinc.com/technology/security/) tailored to your organization's specific needs. Our experts ensure a secure foundation for your AI applications, with a strong focus on data privacy and ethical guidelines. By partnering with Calsoft, you benefit from our extensive experience, modern technology, and commitment to innovation. Trust Calsoft to safeguard your sensitive data and ensure the reliability of your AI services, helping your enterprise thrive in the digital age.
| calsoftinc |
1,912,901 | I | A post by Артем Пустовалов | 0 | 2024-07-05T15:19:27 | https://dev.to/__713d0caf/i-4e0e | __713d0caf |
||
1,912,899 | Key stepped outlined in the Q1 & Q2 roadmap for elevating your SEO strategy, & website's potentiality | Elevate your SEO strategy with SEOSiri and watch your website soar in 2024. This Q1 & Q2... | 0 | 2024-07-05T15:18:44 | https://dev.to/seosiri/key-stepped-outlined-in-the-q1-q2-roadmap-for-elevating-your-seo-strategy-websites-potentiality-59jo | seo, webdev, strategy, digitalmarketing | ## Elevate your SEO strategy with SEOSiri and watch your website soar in 2024. This Q1 & Q2 roadmap outlines the key steps to unleashing your website's potential and achieving remarkable growth.
Dominate Search & Drive Results: Align Your SEO Strategy and Optimize for Peak Traffic from the consecutive answer to those questions that answer competitive win over the competitor's Search Position in the SERPs on Search Volume:
How can I ensure my SEO strategy is aligned with the latest trends and best practices?
How can I optimize my website to achieve peak performance and attract more traffic?
that deeply anchored with this post context (SEOSiri Newsletter - Q1 2024: Elevate Your SEO Game, SEOSiri Newsletter - Q2 2024: Boost Your Website's Performance:
Read more- [Key stepped outlined in the Q1 & Q2 roadmap for elevating your SEO strategy, & website's potentiality](https://www.seosiri.com/2024/07/seo-strategy-website-potentiality.html)
#seo #seostrategy #seostrategies #seosiri #websiteseo #blogseo #organictraffic #websiteoptimization #digitalmarketing | seosiri |
1,912,894 | Using livekit for Video Conferencing in ChatGPT | Using livekit for Video Conferencing in ChatGPT Detailed... | 0 | 2024-07-05T15:16:26 | https://dev.to/dairoot/using-livekit-for-video-conferencing-in-chatgpt-2jk4 | chatgpt, livekit, voice | Using livekit for Video Conferencing in ChatGPT
Detailed Document:https://github.com/dairoot/chatgpt-livekit/tree/main
## Usage
Set the token value in the file, then run the command below; it prints a URL that you can open in your browser.
```bash
python main.py
```
The main.py file:
```python
import uuid

import requests

chatgpt_token = None


def get_livekit_url():
    headers = {
        "content-type": "application/json",
        "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36",
        "authorization": "Bearer {}".format(chatgpt_token),
    }
    res = requests.post("https://chatgpt.com/voice/get_token", headers=headers, cookies={'__cf_bm': ''}, json={
        "voice": "cove",
        "voice_mode": "standard",
        "parent_message_id": str(uuid.uuid4()),
        "model_slug": "auto",
        "voice_training_allowed": False,
        "enable_message_streaming": False,
        "language": "zh",
        "video_training_allowed": False,
        "voice_session_id": str(uuid.uuid4())
    }).json()

    # livekit url
    livekit_url = "https://meet.livekit.io"
    url = "{}/custom?liveKitUrl={}&token={}#{}".format(livekit_url, res["url"], res["token"], res["e2ee_key"])
    return url


if not chatgpt_token:
    print("Get ChatGPT Token: https://chatgpt.com/api/auth/session")
else:
    print(get_livekit_url())
```
![show](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/koxfecfg94ayb4kwgsz7.jpg)
If you don't have a ChatGPT Token and want to try it out, you can directly visit https://chatgpt.dairoot.cn and click on `免费体验` ("Free Trial"). | dairoot
1,912,898 | Dive into the World of Digital Media with the iTunes Store Course 🍎 | Learn how to connect to the iTunes Store, a leading digital media platform, with step-by-step guidance on downloading and using the iTunes software. | 27,844 | 2024-07-05T15:15:23 | https://dev.to/getvm/dive-into-the-world-of-digital-media-with-the-itunes-store-course-2imc | getvm, programming, freetutorial, universitycourses |
As someone who's always on the lookout for new and exciting learning opportunities, I recently stumbled upon a gem of a course that I just had to share with you all. It's called "CSCI 5710 | e-Commerce Implementation | iTunes Store Access," and it's a comprehensive guide to navigating the vast and ever-evolving world of the iTunes Store.
![MindMap](https://internal-api-drive-stream.feishu.cn/space/api/box/stream/download/authcode/?code=ZTQyMDkxNjRhMjcwNDdmYjVhMzhjN2JhYjQ1NTBmMzJfZDk3OGY1Njg0MzMyNzk0YjY3MDE2OGIyMWE3MDRlMjJfSUQ6NzM4ODE3MDU1NjU3MjQ4MzU4OF8xNzIwMTkyNTIxOjE3MjAyNzg5MjFfVjM)
## Unlock the Power of the iTunes Store 🔑
This course is a true treasure trove of knowledge, providing step-by-step instructions on how to download and use the iTunes software. But that's just the beginning! You'll also learn how to access and explore the iTunes Store, a leading digital media platform that offers a vast selection of music, movies, TV shows, apps, and more.
## Discover a World of Digital Content 🌍
Whether you're looking to expand your personal music library, catch up on the latest blockbuster films, or dive into a new mobile game, the iTunes Store has it all. And with the guidance provided in this course, you'll be able to navigate the platform with ease, finding exactly what you're looking for and making seamless purchases.
## Unlock Your Creativity and Productivity 🎨
But the benefits of this course don't stop there. By mastering the iTunes Store, you'll unlock a world of creative and productivity-boosting tools. Imagine being able to download the latest design software, productivity apps, or even educational resources to enhance your personal or professional life. The possibilities are endless!
So, if you're ready to take your digital media experience to the next level, I highly recommend checking out the "CSCI 5710 | e-Commerce Implementation | iTunes Store Access" course. You can access it by visiting the [iTunes Store](https://itunes.apple.com/us/itunes-u/e-commerce-implementation/id1020427670) and diving in. Trust me, it's a game-changer! 🚀
## Supercharge Your Learning with GetVM's Playground 🚀
While the "CSCI 5710 | e-Commerce Implementation | iTunes Store Access" course provides a wealth of theoretical knowledge, the real magic happens when you put that knowledge into practice. That's where GetVM's Playground comes in!
This powerful online coding environment allows you to dive right into the course content and experiment with the concepts you're learning. With just a few clicks, you can access a pre-configured virtual machine, complete with all the necessary tools and resources, and start coding away. No more hassle with set-up or environment configuration – just pure, uninterrupted learning and hands-on experience.
The GetVM Playground is the perfect complement to the iTunes Store course, enabling you to test your understanding, experiment with different approaches, and solidify your knowledge through practical application. Whether you're a seasoned programmer or a complete beginner, the Playground's intuitive interface and comprehensive support make it easy to dive in and start coding.
So, why not take your learning to the next level? Head over to the [GetVM Playground](https://getvm.io/tutorials/csci-5710-e-commerce-implementation-fall-2015-etsu-itunes) and start putting your newfound iTunes Store knowledge into action. Trust me, the sense of accomplishment you'll feel as you see your code come to life will be truly rewarding. 🎉
---
## Practice Now!
- 🔗 Visit [CSCI 5710 | e-Commerce Implementation | iTunes Store Access](https://itunes.apple.com/us/itunes-u/e-commerce-implementation/id1020427670) original website
- 🚀 Practice [CSCI 5710 | e-Commerce Implementation | iTunes Store Access](https://getvm.io/tutorials/csci-5710-e-commerce-implementation-fall-2015-etsu-itunes) on GetVM
- 📖 Explore More [Free Resources on GetVM](https://getvm.io/explore)
Join our [Discord](https://discord.gg/XxKAAFWVNu) or tweet us [@GetVM](https://x.com/getvmio) ! 😄 | getvm |
1,912,897 | Pokemon infinite fusion all fusions | Welcome to the fascinating world of Pokémon Infinite Fusion! This unique fan-made game takes the... | 0 | 2024-07-05T15:14:44 | https://dev.to/pokemone_fusion_3e39327cc/pokemon-infinite-fusion-all-fusions-9nf | Welcome to the fascinating world of Pokémon Infinite Fusion! This unique fan-made game takes the beloved Pokémon franchise to an entirely new level by allowing trainers to combine any two Pokémon into a single, entirely new creature. With over 175,000 possible fusion combinations, the adventure is truly endless. Whether you're looking to create the ultimate battle-ready Pokémon or just want to see what wild and imaginative creatures you can come up with, Pokémon Infinite Fusion offers a creative and immersive experience that keeps fans coming back for more. Dive in, experiment with different fusions, and explore a world where the possibilities are as limitless as your imagination! [Read more](https://pokemoneinfinitefusion.com/pokemon-infinite-fusion-items/). | pokemone_fusion_3e39327cc |
|
1,912,896 | Real-time Disk Size Expanding in a Linux virtual machine | I was compiling the Linux kernel from source and i had a vm running AlmaLinux with 20GB Logical... | 0 | 2024-07-05T15:14:35 | https://dev.to/wassim31/real-time-disk-size-expanding-in-a-virtual-machine-on-linux-machine-307i | I was compiling the Linux kernel from source and I had a VM running AlmaLinux with a 20GB logical volume. I hit the maximum size, so I had to expand my disk without losing my data or the progress of a 12-hour compilation process (yes, a heavy kernel and weak hardware).
This solution is used for storage management by cloud service providers such as AWS and Azure.
**_Linux Distribution_**: AlmaLinux 9.4 on VMware Workstation 7
When you expand the disk size of your Linux virtual machine (VM) on VMware, you must adjust the partitions within the system to utilize the additional space.
This guide will walk you through the steps required to resize the partitions and filesystems using bash.
**Prerequisites**
Before you begin, ensure you have:
Expanded the disk size of your VM in VMware.
Root or sudo access to your Linux VM.
**Step 1:** Identify the New Disk Size
First, verify the current disk and partition sizes using the lsblk command:
```
lsblk
```
The output will show all block devices and their partitions. Identify the disk you have expanded. For example:
```
NAME                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sr0                   11:0    1  988M  0 rom
nvme0n1              259:0    0  100G  0 disk
├─nvme0n1p1          259:1    0    1G  0 part /boot
└─nvme0n1p2          259:2    0   19G  0 part
  ├─almalinux-root   253:0    0   17G  0 lvm  /
  └─almalinux-swap   253:1    0    2G  0 lvm  [SWAP]
```
**Step 2:** Resize the Partition
Use the **growpart** utility to resize the partition. If it's not already installed, you can install it with:
```
sudo dnf install cloud-utils-growpart
```
Then, resize the partition nvme0n1p2:
```
sudo growpart /dev/nvme0n1 2
```
**Step 3:** Resize the Physical Volume (PV)
Next, resize the physical volume to recognize the expanded partition:
```
sudo pvresize /dev/nvme0n1p2
```
**Step 4:** Verify the Physical Volume Size
Verify the changes using the pvs command:
```
sudo pvs
```
**Step 5:** Resize the Logical Volume (LV)
Assuming you want to allocate all the new space to the root logical volume (almalinux-root):
```
sudo lvextend -l +100%FREE /dev/almalinux/root
```
**Step 6:** Determine the Filesystem Type
Before resizing the filesystem, determine the filesystem type. You can do this with the **df -Th** or **lsblk -f** commands:
```
df -Th
```
or
```
lsblk -f
```
**Step 7:** Resize the Filesystem
Depending on the filesystem type, use the appropriate command to resize the filesystem.
**For ext4:**
```
sudo resize2fs /dev/almalinux/root
```
**For xfs:**
```
sudo xfs_growfs /
```
**Step 8:** Verify the Changes
Finally, verify that the new space is available using the lsblk and df -h commands:
```
lsblk
df -h
```
**Conclusion**
By following these steps, you can effectively utilize the additional disk space after expanding your Linux VM disk on VMware. | wassim31 |
|
1,912,895 | Load modules on specific page | Hi! The problem is i want to load modules on specific page or route, like when my project loads i do... | 0 | 2024-07-05T15:12:06 | https://dev.to/ateeq_ashiq/load-modules-on-specific-page-ige | Hi!
The problem is that I want to load modules only on a specific page or route. For example, when my project loads, I do not want some modules, like PayPal, to load on the homepage; I want the PayPal module to load only on the /Checkout route. | ateeq_ashiq
|
1,910,958 | Announcing the Alpha Release of xstate-ngx! | I am very excited to announce the alpha release of xstate-ngx! This marks a significant milestone in... | 0 | 2024-07-05T15:09:43 | https://www.wordman.dev/blog/xstate-ngx-announcement/ | angular, xstate, webdev | ---
canonical_url: https://www.wordman.dev/blog/xstate-ngx-announcement/
---
I am very excited to announce the alpha release of **xstate-ngx**! This marks a significant milestone in integrating XState with Angular, and I can't wait for you to try it out and share your feedback.
For now, the project is published under `xstate-ngx`. However, we're planning to move it into the official XState monorepo once discussions are finalized and your feedback has been implemented. You can track the progress and discussions in the [related PR](https://github.com/statelyai/xstate/pull/4816/files).
## What is xstate-ngx
You might be wondering, what is xstate-ngx?! The official xstate documentation says the following:
> XState is a state management and orchestration solution for JavaScript and TypeScript apps.
> It uses event-driven programming, state machines, statecharts, and the actor model to handle complex logic in predictable, robust, and visual ways. XState provides a powerful and flexible way to manage application and workflow state by allowing developers to model logic as actors and state machines. It integrates well with React, Vue, Svelte, and [...]
and now there is an angular integration! xstate-ngx uses the primitives that XState provides and provides a thin wrapper to utilize Angular's Dependency Injection mechanism and signals.
## Why Alpha?
This alpha release aims to give you a taste of the developer experience with `xstate-ngx`. I want you to play with it, explore its capabilities, and most importantly, provide feedback on the general API design. Your input is crucial in shaping the future of this integration.
## Getting Started
To help you get started, we've provided several examples on [GitHub](https://github.com/niklas-wortmann/xstate-angular). Here's a quick highlight of how you can use `xstate-ngx` in your projects:
### Example: Simple Toggle Machine
```typescript
import { createMachine } from 'xstate';
import { useMachine } from 'xstate-ngx';
import { Component, inject } from '@angular/core';

// Define your machine
const toggleMachine = createMachine({
  id: 'toggle',
  initial: 'inactive',
  states: {
    inactive: {
      on: { TOGGLE: 'active' },
    },
    active: {
      on: { TOGGLE: 'inactive' },
    },
  },
});

const ToggleMachineService = useMachine(toggleMachine);

@Component({
  selector: 'app-toggle',
  providers: [ToggleMachineService],
  template: `
    <button (click)="toggleMachine.send({ type: 'TOGGLE' })">
      {{ toggleMachine.snapshot().value === 'inactive' ? 'Off' : 'On' }}
    </button>
  `,
  standalone: true
})
export class ToggleComponent {
  protected toggleMachine = inject(ToggleMachineService);
}
```
In this example, we define a simple toggle state machine and create an injectable using `useMachine` from `xstate-ngx`. The returned service can then be injected into a component. The `snapshot` property is a signal, allowing for fine-grained reactivity and making it easy to derive state using Angular's `computed` function.
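Because `snapshot` is a signal, deriving state is straightforward. Here's a minimal sketch building on the `ToggleMachineService` above (the component itself is hypothetical, not part of the library):
```typescript
import { Component, computed, inject } from '@angular/core';

@Component({
  selector: 'app-toggle-status',
  providers: [ToggleMachineService],
  template: `<p>{{ isActive() ? 'On' : 'Off' }}</p>`,
  standalone: true
})
export class ToggleStatusComponent {
  protected toggleMachine = inject(ToggleMachineService);
  // Derived signal: recomputes whenever the machine's snapshot changes.
  protected isActive = computed(
    () => this.toggleMachine.snapshot().value === 'active'
  );
}
```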
## Special Thanks
I couldn't have reached this milestone without the invaluable contributions and support from the community:
- [**Enea Jahollari**](https://x.com/Enea_Jahollari) and [**Chau Tran**](https://x.com/Nartc1410) for their first round of feedback.
- [**David Khourshid**](https://x.com/DavidKPiano) and [**Mateusz Burzyński**](https://x.com/AndaristRake) for their outstanding work on XState, their interest in an Angular implementation, and their insightful discussions about this topic.
- The design and implementation are heavily inspired by NgRx Signals, so many thanks to the NgRx team!
## Join the Conversation
Your feedback is essential to us. Join the conversation, try out the alpha release, and let us know your thoughts. For now the best place to share any kind of feedback is the [xstate-angular repository](https://github.com/niklas-wortmann/xstate-angular). Together, we can make `xstate-ngx` a robust and delightful tool for the Angular community.
Let me know what you think and happy coding!
| niklas_wortmann |
1,912,893 | Lado Okhotnikov on mocap technology for Meta Force | Lado Okhotnikov and his revolutionary move to develop virtual realms. How Motion Capture technology... | 0 | 2024-07-05T15:08:09 | https://dev.to/blogger_ali_15ccbe5fde110/lado-okhotnikov-on-mocap-technology-for-meta-force-1jpb | Lado Okhotnikov and his revolutionary move to develop virtual realms. How Motion Capture technology is introduced in the Meta Force Metaverse
The recently published [article on the mo-cap technology](https://beincrypto.com/lado-okhotnikov-about-an-integral-part-of-the-project/) reveals an interesting case of its introduction to metaverses. Lado Okhotnikov, the visionary behind Meta Force, asserts innovative technologies are just necessary to create something intriguing and innovative in the digital realms. In particular, his platform tried to take advantage of the mocap technology. It has become indispensable in the realm of expensive projects worth millions of dollars. It is really preferable to resort to it when a large budget is involved. The article shows how it works.
Why integrate Motion Capture
This innovative technology is used, for instance, in blockbuster games such as Halo 4, LA Noire, and Beyond: Two Souls and lots of other hits. The technology enables actors to infuse their digital counterparts with lifelike movements, transcending traditional voice acting to embody characters authentically on-screen.
A new standard in Metaverse experiences
Meta Force also utilizes the technology pursuing unparalleled realism. According to the Metaworld'd concept, its inhabitants should seamlessly mirror reality. It is important to achieve it for complete immersion of users, without any artificial undertones.
Lado Okhotnikov envisions setting a new standard in Metaverse experiences. He was inspired by notable games and decided to borrow some techniques from games like [GTA V](https://www.rockstargames.com/gta-v).
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ghsu9mowsr7c03gye0yo.png)
What sets them apart are character animations, which are derived from real-life performances rather than computer simulations.
Mo-cap in Meta Force: how it happens
Lado says that understanding the mo-cap mechanics was just the beginning. The importance of spontaneity and improvisation became evident as the team explored virtual environments. It involved rigorous experimentation and even Lado personally participated in it. He wanted to ensure that his digital avatar reflects true-to-life movements.
Achieving realism in tasks like this is a continuous process. In Meta Force the dedicated team of professionals, including mo-cap designers, works over it. The members of the team try to refine every detail. Their expertise enhances animations, ensuring that the final product is visually captivating and devoid of the artificiality found in conventional animation.
The role of the mocap designer is instrumental. The attempt to breathe life into visuals adds new dynamism to the objects. There are numerous challenges on the way as users delve deeper into the animation.
Meta Force by Lado Okhotnikov represents more than just a virtual platform since it embodies a paradigm shift in the development of virtual worlds. The team’s vision is to blur the frontiers between the real and virtual worlds. The platform is introducing a new and original way of evolving virtual environments on a worldwide scale. MetaForce is breaking new ground that offers novel ideas and methods within the realm of virtual environments.
Promising project by Lado Okhotnikov
The project can acquire tremendous popularity. The platform is looking forward to a future scenario where many tools are available for users to interact within the Metaverse. The members of the community will be able to explore and operate within a vast and unrestricted space.
Lado Okhotnikov always emphasizes the concept of decentralization, regardless of the activity. It forms an absolutely different approach within the community and helps to revolutionize the platform and develop without the control of a single entity. It empowers users to navigate a limitless, decentralized realm and employ almost unlimited possibilities for growth and development.
About company
Meta Force is a company developing unique Metaverse based on Polygon blockchain. The Metaverse is optimized for business applications.
Lado Okhotnikov is a CEO of Meta Force, expert in IT and crypto industry.
Based on Dan Michael materials
The head of Meta Force Press Center
[email protected]
#ladookhotnikov
| blogger_ali_15ccbe5fde110 |
|
1,912,883 | Automating User Creation and Management with Bash | As a SysOps engineer, managing users and groups in a Linux environment can be a repetitive and... | 0 | 2024-07-05T15:05:42 | https://dev.to/diokpa/automating-user-creation-and-management-with-bash-2mek | webdev, hng, linux, devops | As a SysOps engineer, managing users and groups in a Linux environment can be a repetitive and time-consuming task. To streamline this process, we can leverage a bash script to automate user creation, group assignments, home directory setup, password generation, and logging. This article walks through the implementation of such a script, create_users.sh, which reads user information from a text file and performs the necessary operations. This solution is especially useful when onboarding new employees.
Script Requirements and Functionality
Our script will:
- Read a text file containing usernames and groups.
- Create users with personal groups matching their usernames.
- Assign users to additional specified groups.
- Set up home directories with appropriate permissions.
- Generate random passwords for new users.
- Log all actions to /var/log/user_management.log.
- Store generated passwords securely in /var/secure/user_passwords.csv.
Implementation Details
1. Script Initialization and Input Validation
The script starts by defining log and password files. It then checks if the input file is provided and exists:
```bash
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"

# Log helper with a timestamp (assumed here: the original snippet calls
# log_message throughout without showing its definition)
log_message() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$LOG_FILE"
}

if [ $# -eq 0 ]; then
    echo "Usage: $0 <name-of-text-file>"
    exit 1
fi

if [ ! -f "$1" ]; then
    echo "Error: File $1 not found!"
    exit 1
fi
```
2. Secure Directory and File Setup
We ensure that the directory for storing passwords exists and has the correct permissions:
```bash
mkdir -p /var/secure
touch "$PASSWORD_FILE"
chmod 600 "$PASSWORD_FILE"
```
3. Reading and Processing the Input File
The script reads the input file line by line, creating users and assigning them to groups as specified:
```bash
while IFS=';' read -r username groups; do
    username=$(echo "$username" | xargs)
    groups=$(echo "$groups" | xargs)

    if id -u "$username" >/dev/null 2>&1; then
        log_message "User $username already exists"
    else
        # Create the personal group first: useradd -g requires an existing group
        groupadd "$username"
        useradd -m -g "$username" -s /bin/bash "$username"
        log_message "User $username created with primary group $username"

        chmod 700 "/home/$username"
        chown "$username:$username" "/home/$username"

        password=$(openssl rand -base64 12)
        echo "$username:$password" | chpasswd
        echo "$username,$password" >> "$PASSWORD_FILE"
        log_message "Password for user $username set and stored securely"
    fi

    IFS=',' read -ra additional_groups <<< "$groups"
    for group in "${additional_groups[@]}"; do
        group=$(echo "$group" | xargs)
        if getent group "$group" >/dev/null; then
            usermod -aG "$group" "$username"
            log_message "User $username added to group $group"
        else
            groupadd "$group"
            usermod -aG "$group" "$username"
            log_message "Group $group created and user $username added"
        fi
    done
done < "$1"
```
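For illustration, an input file for this script might look like the following (the usernames and group names here are hypothetical):
```
alice; developers,admins
bob; developers
carol; qa
```
Each line follows the `username;group1,group2` format, and the `xargs` calls in the script trim any surrounding whitespace.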
Logging and Security
All actions are logged for auditing purposes. Passwords are stored securely with restricted access to ensure only the file owner can read them.
**Conclusion**
Automating user and group management in Linux environments can significantly reduce administrative overhead. The create_users.sh script provides a robust solution for onboarding new users, ensuring that they are set up with the necessary permissions and groups efficiently. For more details about the HNG Internship and opportunities,
visit https://hng.tech/internship &
https://hng.tech/premium
By automating these tasks, SysOps engineers can focus on more critical aspects of system administration, improving overall productivity and system security.
| diokpa |
1,912,877 | Apirone makes crypto management easy with auto-transfers | In the fast-paced world of cryptocurrency, managing your digital assets efficiently can be... | 0 | 2024-07-05T15:01:55 | https://dev.to/apirone_com/apirone-makes-crypto-management-easy-with-auto-transfers-9d1 | cryptocurrency | In the fast-paced world of cryptocurrency, managing your digital assets efficiently can be challenging. Whether you’re dealing with fluctuating markets, multiple wallets, or complex transaction schedules, managing your crypto business demands time and precision. Apirone, a cryptocurrency gateway service, addresses these needs with its innovative feature: automatic transfer or forwarding of payments. This tool promises to simplify crypto management, and an exciting update will soon introduce recurrent payments, further enhancing the feature's utility.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/90jxnmpakwljs47wbwma.png)
## Streamlining crypto management with automatic withdrawals
Auto-transfer by Apirone is a game-changer for anyone involved in the cryptocurrency space. This feature allows users to automate the withdrawal of their digital assets from their Apirone account to any designated wallet or exchange. By automating these processes, Apirone eliminates the need for manual management, making crypto operations more efficient and less prone to errors.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lagkh2lp9leqxg4piuzd.png)
## How it works:
- Set and forget: Users can configure their withdrawal settings once, specifying the destination address. Once set, Apirone handles the rest, executing withdrawals according to the user’s settings.
- Advanced security: Every transaction processed through auto-transfer is safeguarded by Apirone's robust security measures, including isolated wallets and no ‘common pools’, ensuring that your assets are always protected.
- Real-time monitoring: Users have access to a comprehensive dashboard where they can monitor the status of their automated withdrawals, and track transaction history.
This feature is particularly beneficial for users who need to regularly move their crypto to different wallets for investment, storage, or payment purposes. By reducing the need for manual transfers, Apirone helps users avoid the pitfalls of human error and the inconvenience of repetitive tasks.
## Upcoming recurrent payments: automating your crypto expenses
Adding another layer of convenience, Apirone is set to introduce recurrent payments to its auto-transfer feature soon. This upcoming update will allow users to schedule regular payments in cryptocurrency, akin to standing orders in traditional banking. Whether you need to pay for subscriptions, handle regular withdrawal of a definite amount, or manage periodic investments, recurrent payments will simplify these processes.
## What to expect:
- Flexible scheduling: Users will be able to set up recurring payments to occur daily, weekly, monthly, or at any custom interval that suits their needs.
- Enhanced control: The feature will provide options to modify scheduling rules if they need to change time of withdrawal or switch it off at all, ensuring that users have full control over their finances.
## How to get started in 3 easy steps:
- Log in to Apirone account: Access your account or sign up.
- Navigate to “Auto-transfers”: Find the new feature in the dashboard menu.
- Set up the rules: Configure the transfer details, including frequency, amount, and destinations.
With recurrent payments, [Apirone](https://apirone.com/) makes crypto transactions as seamless as traditional financial services, bringing users the advantages of fiat banking convenience and the innovative world of blockchain.
| apirone_com |
1,912,878 | Introducing the graceful and premium custom USB rigid boxes | Introduction USB boxes are ideal for photographers, designers, and printers. Further, the... | 0 | 2024-07-05T14:59:26 | https://dev.to/weprintboxes/introducing-the-graceful-and-premium-custom-usb-rigid-boxes-dae | ## Introduction
USB boxes are ideal for photographers, designers, and printers. Further, the exclusive gift boxes are modern and perfect for USB sticks. This presents a fantastic gift idea for such a popular and essential in all kinds of business.
Firstly, we provide high-end custom USB packaging for your USB device to deliver them damage-free at your customer's doorstep. In contrast, USB gifts are more attractive and memorable. The graceful design of the packaging fascinates and motivates newbies.
Secondly, USB packaging with decent colour and appealing design attracts the customer. So, [USB rigid boxes](https://www.weprintboxes.com/usb-rigid-boxes/) with product info such as speed, storage capacity, and usage details show your care about your product and customer.
In short, the label of the products holds your brand identity to the next level. Moreover, the USB box looks stunning and eye-catching to amplify the beauty of your deluxe USB drive. So, to make a USB a perfect gift you need custom USB packaging with exclusive opening and closing styles.
## Functionality and Features
The USB drive with foam insert provides optimal protection for your valuable data storage devices. So, the foam inserts inside the box securely hold your USB flash and prevent any potential damage during transportation.
In contrast, these boxes contribute and play a significant role in maximizing the beauty of your product and display shelves. Further, these boxes are durable enough to keep your USB drive safe from shocks and adverse external conditions.
Moreover, the lightweight design of these boxes is perfect for individuals who frequently carry USB flash drives on the go. So, this feature is particularly beneficial for photographers, students, and professionals who rely on USB.
## Application and versatility
The USB drive box is suitable for a wide range of applications. So, you need to store important document files and other digital content. This box provides a secure solution.
Further, the box that comes with these tiny storage devices is helpful and adds to its charm and appeal. USB custom boxes give you an appealing and creative option. So, our customers can enjoy the various colour designs from which your flash drive can come.
Moreover, this versatile storage solution is not only a USB. It also accommodates other accessories like memory cards and earphones. These boxes not only make excellent gift-giving options but also, leave the receivers with a lasting impression.
## Durable material and design
The USB flash drive box uses durable material to ensure long-lasting performance. Further, the box's outer shell protects against impact and external elements. The cardboard boxes are more environmentally friendly because they damage the environment long-term.
Additionally, USB packaging that is of high quality and looks nice is often kept for a long time. So, USB drive's rigid boxes protect the product from damage. Hence, it needs to be sturdy and reliable.
Moreover, the boxes help maintain the integrity of the item inside. In short, the boxes are small but they're reliable. These boxes are strong and secure your USB stick with a ribbon and tie around the box insert for tight fit. USB packaging design is an option available according to your needs.
In contrast, USB boxes are available in different sizes and designs. The USB [custom rigid boxes](https://www.weprintboxes.com/custom-rigid-boxes/) are an excellent match for your gift. We believe the boxes are a reflection of your product.
## Popular types of USB rigid packaging
Today the customer enters the world of premium packaging. So, you have a tone of personalization to make your boxes one of a kind. We provide colour models with offset and digital printing choices.
Moreover, you choose the appropriate materials for your boxes, if you want to maintain product safety. We also provide all these custom USB flash drive designs and styles for your requirements.
In short, use an attractive sophisticated USB enclosure that secures your device and stands out from the business. In addition to shape and size, we provide the following variety to our clients with the categories of customization possibilities. Let see;
· Gable Tuck-end USB packaging
· Custom foam insert boxes
· Sleeve USB packaging
· Window cut boxes
## Gable tuck-end USB packaging
Gable boxes are sturdy packaging boxes. Further, the custom gable box adds value and style to your item. A gable box style has a tuck-in front that stores your product with an easy step. So, our tuck-end gable boxes feature sufficient storage sizes for use.
Moreover, our boxes have become an ideal choice for the customer. So, the gable box keeps your product safe and secure.
## Custom foam insert box
Custom foam inserts are soft pieces that fit perfectly into cases. In contrast, these insert boxes are essential because they prevent damage by safely holding items during shipping.
Further, our custom foam insert packaging organizes and protects delicate items like USBs, cameras, and other electronics. So, our foam inserts rigid boxes protect temperature-sensitive products.
In short, we provide different types of luxury foam insert rigid boxes.
## Sleeve USB packaging
Everyone loves slide-in boxes because they bring joy through unboxing. Further, custom foldable trays and sleeve boxes combine a sliding tray and sleeve. Our luxury sleeve boxes are customized with various styles, colour themes, designs, and materials.
Moreover, our sleeve packaging solution delivers a visually appealing high-end unboxing experience. These boxes are perfect for packaging lightweight products and are fully customizable so that you showcase your brand.
## Window cut boxes
Die-cut window packaging solutions elegantly showcase your product. Also, it provides protection and enhances brand visibility. Window boxes come in various materials, each saving a unique purpose.
In contrast, the versatility of boxes is evident in the application. The packaging type also offers a balance between protection and product display, ensuring items remain intact during handling and display. These boxes are eco-friendly. | weprintboxes |
|
1,912,817 | What is a DI Container? | In the last post in this series we learned about dependency injection. We saw that we needed to... | 27,962 | 2024-07-05T14:54:14 | https://dev.to/emanuelgustafzon/what-is-a-di-container-468h | programming, dependencyinversion, designpatterns | In the last post in this series we learned about dependency injection.
We saw that we needed to manually create an instance of the database connection class before creating an instance of the post routes class. Then, we passed the connection instance into the constructor of the posts object.
Imagine, though, if we could just create an instance of the post routes class, and the system itself would automatically know that the posts class depends on a database connection and create that instance for us 💡.
There you have it! A Dependency Injection Container is doing just that.
A DI Container is responsible for creating instances of classes and managing their dependencies. Here’s a more detailed breakdown:
## Registering Services
First, you register your classes with the container and specify their dependencies. For example, you tell the container that PostRoutes depends on a DatabaseConnection.
## Resolving Services
When you ask the container to resolve a service (i.e., create an instance of a class), the container:
1. Creates an instance of the class.
2. Resolves all the dependencies that the class needs by creating instances of those dependencies as well.
This means that the container automatically creates and injects instances of all the required dependencies, simplifying the process of creating complex objects with multiple dependencies.
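To make this concrete, here is a minimal sketch in plain JavaScript of what registering and resolving could look like (a simplified preview ahead of the full build-out promised at the end of this post; the `DatabaseConnection` and `PostRoutes` classes are stand-ins for the ones from the previous post):

```javascript
// Stub classes standing in for the previous post's examples.
class DatabaseConnection {}
class PostRoutes {
  constructor(db) { this.db = db; }
}

// A tiny DI container: each service is registered with a factory that
// receives the container, so dependencies can be resolved recursively.
class Container {
  constructor() {
    this.factories = new Map();
  }
  register(name, factory) {
    this.factories.set(name, factory);
  }
  resolve(name) {
    const factory = this.factories.get(name);
    if (!factory) throw new Error(`Service "${name}" is not registered`);
    return factory(this); // the factory may call container.resolve() itself
  }
}

// Registration: we tell the container that postRoutes depends on db.
const container = new Container();
container.register("db", () => new DatabaseConnection());
container.register("postRoutes", (c) => new PostRoutes(c.resolve("db")));

// Resolution: the container builds the whole dependency graph for us.
const posts = container.resolve("postRoutes");
console.log(posts.db instanceof DatabaseConnection); // true
```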
# The lifetime of instances.
You need to keep in mind how long an instance lives before it gets renewed. We call this its scope.
The 3 most common scopes are `transient`, `singleton` and `request` (also called `scoped` in .NET).
## Singleton
The singleton scope means that the DI Container creates one instance of the service and keeps that same instance throughout the program’s lifetime. Every time you resolve the service, you get the same instance. This is useful for services that maintain state or are expensive to create and should be shared across the entire application.
##### Example: Logging Service
A logging service that logs messages to the console can be a good use case for a singleton service. A logging service keeps the same state and stays consistent throughout the program.
## Transient
Transient scope means that the DI Container creates a new instance of the service each time it is resolved. This is useful for lightweight, stateless services where each operation should be independent of others.
##### Example: Unique ID Generation Service
A service that generates unique IDs might be a good use case for a transient service. Each request to this service should produce a new, unique value without retaining any state from previous requests.
## Scoped or Request
Scoped services last as long as the HttpContext lasts. This scope is particularly useful in web applications.
When working in the backend, you receive HTTP requests from clients. Each request has a context, known as the HttpContext, which lasts from when the client sends the request to the backend until the backend sends a response back to the client. During this period, multiple operations might occur, such as connecting to a database, fetching data to authorize the user, retrieving various resources, etc.
##### Example: Database Connection Service
For a database connection service, you want the same instance to be used throughout the entire HttpContext to avoid losing the connection during the request processing. This ensures that the database connection remains consistent and efficient throughout the lifecycle of the request.
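Before we get there, here is a rough sketch of how a container might handle the three lifetimes (a simplified model: real containers, like .NET's, track scopes per request automatically):

```javascript
class Container {
  constructor() {
    this.registrations = new Map(); // name -> { factory, lifetime }
    this.singletons = new Map();
  }
  register(name, factory, lifetime = "transient") {
    this.registrations.set(name, { factory, lifetime });
  }
  resolve(name, scopeCache = new Map()) {
    const reg = this.registrations.get(name);
    if (!reg) throw new Error(`Service "${name}" is not registered`);
    if (reg.lifetime === "singleton") {
      // One instance for the whole program lifetime.
      if (!this.singletons.has(name)) this.singletons.set(name, reg.factory());
      return this.singletons.get(name);
    }
    if (reg.lifetime === "scoped") {
      // One instance per scope cache (e.g. one per HTTP request).
      if (!scopeCache.has(name)) scopeCache.set(name, reg.factory());
      return scopeCache.get(name);
    }
    return reg.factory(); // transient: a fresh instance every time
  }
}

// Example usage: one scope cache per incoming HTTP request.
const container = new Container();
container.register("logger", () => ({ log: console.log }), "singleton");
container.register("ids", () => ({ next: () => Math.random() }), "transient");
container.register("dbConnection", () => ({ connected: true }), "scoped");

const requestScope = new Map(); // created when a request arrives
const db1 = container.resolve("dbConnection", requestScope);
const db2 = container.resolve("dbConnection", requestScope);
console.log(db1 === db2); // true: same instance throughout one request
```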
##### Stay tuned, because in the next chapter we will start building our own DI container in JavaScript.
| emanuelgustafzon |
1,912,876 | My HNG Experience Stage One: User Management and Automation With Bash script | The HNG Internship has me on a thrilling ride! My first project is to create a Bash script to... | 0 | 2024-07-05T14:53:28 | https://dev.to/lois_oseodion_9055bdf056d/my-hng-experience-stage-one-user-management-and-automation-with-bash-script-1h0g | beginners, devops, bash, aws | The [HNG](https://hng.tech/internship) Internship has me on a thrilling ride! My first project is to create a Bash script to automate user management on a Linux server. This project showcases scripting's power and highlights the skills I'm gaining at [HNG](https://hng.tech/internship). Get ready to see how this script simplifies user and group management!
**Prerequisites and Requirements**
**Prerequisites:**
Access to a Linux environment (e.g., Ubuntu)
Basic understanding of how to run scripts and manage files in a Linux terminal
Permissions to create users, groups, and files
**Requirements:**
Input File Format: The script will read a text file where each line is formatted as {username; groups}.
Example:
```
kelvin; admin,dev
Hannah; dev,tester
Gift; admin,tester
```
**Script Actions:**
Create users (kelvin, Hannah, Gift) and their personal groups (admin, dev, tester).
Place users in the designated additional groups (admin, dev, tester).
Create home directories for each user with the correct permissions.
Create random passwords for each user.
Record all actions in /var/log/user_management.log.
Save passwords securely in /var/secure/user_passwords.txt.
Gracefully manage errors, such as users or groups that already exist.
**Step-by-Step Implementation**
**Step 1:**
Script Initialization and Setup
Set up the initial environment for the script, including defining file locations and creating necessary directories.
Define File Locations: Initializes paths for logging and password storage.
Create Directories: Ensures necessary directories exist.
Set File Permissions: Create and set permissions for the log and password files.
```
#!/bin/bash
# Define log and password file locations
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.txt"
# Create Directories
mkdir -p /var/log
mkdir -p /var/secure
# Create and set permissions for the log file
touch $LOG_FILE
chmod 644 $LOG_FILE
# Create and set permissions for the password file
touch $PASSWORD_FILE
chmod 600 $PASSWORD_FILE
```
**Step 2:**
**Logging Function Creation**
Create a function to log actions performed by the script with timestamps.
```
# Function to log messages with timestamps
log_action() {
echo "$(date '+%Y-%m-%d %H:%M:%S') : $1" >> $LOG_FILE
}
```
**Step 3:**
**Argument Checking**
Verify that the script is provided with the correct number of arguments.
```
# Check if a correct number of arguments is provided.
if [ $# -ne 1 ]; then
log_action "Usage: $0 <user-list-file>. Exiting."
exit 1
fi
USER_LIST_FILE=$1
# Check if user list file exists
if [ ! -f $USER_LIST_FILE ]; then
log_action "File $USER_LIST_FILE does not exist! Exiting."
exit 1
fi
```
**Step 4:**
**Reading and Processing User List**
Read each line from the user list file, extracting usernames and associated groups.
```
# Process each line in the user list file
while IFS=';' read -r username groups; do
username=$(echo $username | xargs)
groups=$(echo $groups | xargs)
# Further actions based on extracted data will be performed in subsequent steps.
done < $USER_LIST_FILE
```
**Step 5:**
**User Existence Checking and Creation**
Verify if each user already exists; if not, create the user.
```
# Check if the user already exists
if id -u $username >/dev/null 2>&1; then
log_action "User $username already exists. Skipping."
continue
fi
# Create the user if they do not exist
useradd -m $username
if [ $? -eq 0 ]; then
log_action "User $username created successfully."
else
log_action "Failed to create user $username."
continue
fi
```
**Step 6:**
**Group Handling**
Create the necessary groups for each user and assign them appropriately.
```
# Assign user to specified additional groups
IFS=',' read -ra USER_GROUPS <<< "$groups"
for group in "${USER_GROUPS[@]}"; do
group=$(echo $group | xargs)
if ! getent group $group >/dev/null; then
groupadd $group
if [ $? -eq 0 ]; then
log_action "Group $group created successfully."
else
log_action "Failed to create group $group."
continue
fi
fi
usermod -aG $group $username
log_action "User $username added to group $group."
done
```
**Step 7:**
**Home Directory Setup**
Ensure each user has a home directory set up with appropriate permissions.
```
# Set up home directory permissions
chmod 755 /home/$username
chown $username:$username /home/$username
log_action "Home directory permissions set for user $username."
```
**Step 8:**
**Password Generation and Storage**
Generate a secure password for each user and store it securely.
```
# Generate and store passwords securely
password=$(date +%s | sha256sum | base64 | head -c 12 ; echo)
echo "$username,$password" >> $PASSWORD_FILE
log_action "Password for user $username set successfully."
```
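One caveat worth noting: deriving the password from the current timestamp makes it guessable by anyone who knows roughly when the script ran. A possible hardening, not part of the script above, is to draw from a cryptographic random source instead:

```
# Either of these draws from a cryptographic random source instead of the clock
password=$(openssl rand -base64 12)
# or: password=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 12)
```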
**Step 9:**
**Script Completion and Finalization**
Conclude the script execution, logging the completion of all actions.
```
# Final log entry
log_action "Script execution completed."
```
Putting It All Together
Here's the complete script:
```
#!/bin/bash
# Step 1: Define File Locations
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.txt"
# Step 2: Create Directories
mkdir -p /var/log
mkdir -p /var/secure
# Step 3: Set File Permissions
touch $PASSWORD_FILE
chmod 600 $PASSWORD_FILE
touch $LOG_FILE
chmod 644 $LOG_FILE
# Step 4: Define Logging Function
log_action() {
echo "$(date '+%Y-%m-%d %H:%M:%S') : $1" >> $LOG_FILE
}
# Step 5: Argument Checking
if [ $# -ne 1 ]; then
log_action "Usage: $0 <user-list-file>. Exiting."
exit 1
fi
USER_LIST_FILE=$1
if [ ! -f $USER_LIST_FILE ]; then
log_action "File $USER_LIST_FILE does not exist! Exiting."
exit 1
fi
# Step 6: Reading and Processing User List
while IFS=';' read -r username groups; do
username=$(echo $username | xargs)
groups=$(echo $groups | xargs)
# Step 7: User Existence Checking and Creation
if id -u $username >/dev/null 2>&1; then
log_action "User $username already exists. Skipping."
continue
fi
useradd -m $username
if [ $? -eq 0 ]; then
log_action "User $username created successfully."
else
log_action "Failed to create user $username."
continue
fi
# Step 8: Group Handling
IFS=',' read -ra USER_GROUPS <<< "$groups"
for group in "${USER_GROUPS[@]}"; do
group=$(echo $group | xargs)
if ! getent group $group >/dev/null; then
groupadd $group
if [ $? -eq 0 ]; then
log_action "Group $group created successfully."
else
log_action "Failed to create group $group."
continue
fi
fi
usermod -aG $group $username
log_action "User $username added to group $group."
done
# Step 9: Home Directory Setup
chmod 755 /home/$username
chown $username:$username /home/$username
log_action "Home directory permissions set for user $username."
# Step 10: Password Generation and Storage
password=$(date +%s | sha256sum | base64 | head -c 12 ; echo)
echo "$username,$password" >> $PASSWORD_FILE
log_action "Password for user $username set successfully."
done < $USER_LIST_FILE
# Step 11: Script Completion and Finalization
log_action "Script execution completed."
```
**Trying It Out**
Save the file as create_user.sh.
Upload it to a GitHub repository.
Clone the repository to a Linux server.
Run the script with the user list file as an argument.
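For example, on an Ubuntu server, those steps might look like this (the repository URL and file names below are placeholders):
```
# Clone your repository (URL is a placeholder) and enter it
git clone https://github.com/<your-username>/<your-repo>.git
cd <your-repo>

# Create a sample user list in the required format
cat > users.txt <<'EOF'
kelvin; admin,dev
Hannah; dev,tester
EOF

# The script writes to /var/log and /var/secure, so run it as root
sudo bash create_user.sh users.txt

# Check what happened
sudo tail /var/log/user_management.log
```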
The HNG project is more than just an internship; it is a transformative experience that equips participants with the skills, knowledge, and confidence needed to thrive in the fast-paced tech industry. Honestly, I am enjoying it. Thanks for taking the time to read this far. Please kindly like and leave a comment. Thank you!
| lois_oseodion_9055bdf056d |
1,904,378 | CSS: Aprendendo box model com analogias | Introdução Se você está estudando CSS, talvez já tenha se deparado com o termo box model.... | 0 | 2024-07-05T14:50:09 | https://dev.to/fhmurakami/css-aprendendo-box-model-com-analogias-5a9a | css, learning, beginners, braziliandevs | ## Introdução
Se você está estudando CSS, talvez já tenha se deparado com o termo _box model_. Caso ainda não tenha, não se preocupe, iremos abordar esse assunto nesse artigo.
Basicamente todo elemento em uma página web é um retângulo chamado de _box_ (caixa), e daí vem o nome _box model_ ou _modelo de caixa_. Entendermos como funciona esse modelo é a base para conseguirmos criar layouts mais complexos com CSS, ou alinhar itens corretamente.
![Meme GIF CSS - Peter Griffin de Family Guy tentando arrumar uma persiana](https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExZXdoa2wwOXAyNHcwa2F5MmI4dHcxa3AwYnRpc2pyaTJsMzR3cjFydiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/13FrpeVH09Zrb2/giphy.gif)
Ao inspecionar um elemento (clicando com o botão direito ou abrindo o **DevTools** com os atalhos Ctrl+Shift+C ou F12, dependendo do seu navegador), na aba _Computed_ (Calculado), você provavelmente irá ver a imagem a seguir:
<figure>
<a id="computed"></a>
![Screenshot of the DevTools 'Computed' tab](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kf8uwcl1pijc24a2s5qv.png)
<figcaption>Fig.1 - Element properties (_Computed_ tab)</figcaption>
</figure>
In the next section we'll look in detail at what each part of this image means.
## Basic structure of the **box model**
To illustrate the basic structure of the _box model_, I'll use building a house on a plot of land as an example. This idea was inspired by this article [[1]][Ref1] (in English).
The parts that make up the structure of the _box model_ are:
### **Content**
The content refers to the innermost part of [Fig.1][Fig1], in blue, and corresponds to the content inside an HTML tag, for example the text in a paragraph (**`<p>`**).
The content is basically made up of two properties: width (`width`) and height (`height`).
In our example, the content will be the little house below ([Fig.2][Fig2]) (if you inspect the house image, you'll see its measurements are the same as in [Fig.1][Fig1]). The house's dimensions are 81px wide and 93px tall.
<figure>
<a id="casa"></a>
![House made in pixel art](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/04qnyrtcsnvy9dhtxpr3.png)
<figcaption>Fig. 2 - Content (house) and its dimensions</figcaption>
</figure>
The content needs to live inside an HTML structure, so we'll place our little house on a plot of land to represent this structure:
<figure>
<a id="casa-lote"></a>
![House positioned at the center of the plot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s35or8xhu0dgqlf96hql.png)
<figcaption>Fig. 3 - House positioned at the center of the plot</figcaption>
</figure>
### **Padding**
The green part of [Fig.1][Fig1] is the property called `padding`, which creates space around the content.
The `padding` is represented by the patch of soil where the garden will go, for example:
<figure>
<a id="padding"></a>
![Padding added to the plot as an area of soil around the house](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n9mqclmz37kqdgbi7wq3.png)
<figcaption>Fig. 4 - Padding added as an area of soil around the house</figcaption>
</figure>
### **Border**
Next we have the `border` property, which is responsible for delimiting our content and is shown in yellow in [Fig.1][Fig1]. The border is the last property of our element that can actually be seen.
The border can be represented by the wall or, in our case, the house's fence:
<figure>
<a id="margin"></a>
![Border (Cerca) adicionada ao redor do padding](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9jz6dnyxrche2h91ublv.png)
<figcaption>Fig. 5 - Border (Cerca) adicionada ao redor do padding</figcaption>
</figure>
### **Margin**
Finally, we have the `margin` property, in orange ([Fig.1][Fig1]), which adds an empty area around our element, as can be seen in the image:
<figure>
<a id="margin"></a>
![Margin added to the element, creating an empty area around the border](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bb2tbrnugafdsga9thn5.png)
<figcaption>Fig. 6 - Margin added to the element, creating an empty area around the border</figcaption>
</figure>
In this case, we reduced the `padding` so that the margin could be shown in the image.
## The **`box-sizing`** property
Now that we know the structure of the _box model_, we can cover the `box-sizing` property. It lets us tell the browser how it should calculate the element's height and width. There are only 2 possible values:
### content-box
This is the default value, where the element's height and width include only the content. So, if we have content with a `height` and `width` of 100px, plus 10px of `padding`, plus 5px of `border` and 5px of `margin`, we'll see that our element's size changed from 100px to 130px:
<figure>
<a id="content-box-example"></a>
![Div with 100px of height and width, 10px of padding, 5px of border and 5px of margin using the default content-box value](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tsf578lm7q1mnhmeqb4e.png)
<figcaption>Fig. 7 - Div with 100px of height and width, 10px of padding, 5px of border and 5px of margin using the default `content-box` value</figcaption>
</figure>
See the code here:
{% codepen https://codepen.io/fhmurakami/pen/QWXLdpX %}
This happens because the _`height`_ and _`width`_ properties are applied only to the content (the blue part of [Fig.1][Fig1], remember?). But we also added the `padding`, the `border` and the `margin`:
100px (`height`/`width`) + 2 * 10px (`padding`) + 2 * 5px (`border`) + 2 * 5px (`margin`) = **140px**
> #### **Warning!** :warning:
> Huh?! :thinking:
>
> The image shows the element at 130px, not 140px!
> Exactly! Remember that the `margin` is an external property (an empty space around the element) and therefore should not be added to its height and width.
With the `box-sizing` property set to `content-box`, we can think of our element as growing as we add more "layers".
To explain, I'll use one more analogy: imagine a balloon, the kind from a kids' party, with candies inside.
<center>
<table>
<tr>
<td>
<figure>
<a id="baloon"></a>
<img alt="Balão onde iremos colocar os doces" src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jx739awww0jowwoonp8o.png" />
<figcaption>Fig. 8 - Balão de festa (<code>border</code>)</figcaption>
</figure>
</td>
<td>
<figure>
<a id="candies"></a>
<img alt="Doces" src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/15qcojcdwti9cpc9ops0.png" />
<figcaption>Fig. 9 - Doces (<code>content</code>)</figcaption>
</figure>
</td>
</tr>
<tr>
<td colspan="2">
<figure>
<a id="baloon-candies"></a>
<img alt="Balão cortado para exibir os doces dentro" src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/23d4459r9hpc24dinodj.png" />
<figcaption>Fig. 10 - Balão com os doces dentro</figcaption>
</figure>
</td>
</tr>
</table>
</center>
The candies will be our content, with a fixed height and width. To make it possible to pop the balloon, let's add some `padding` (air) inside it. The balloon itself is the `border`, and the `margin` is all the space around the balloon:
<figure>
<a id="baloon-full"></a>
<img alt="Balão pendurado no teto com espaço vazio ao seu redor" src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/huhrywjs718o6twheanr.png" />
<figcaption>Fig. 11 - Doces (<code>content</code>), ar (<code>padding</code>), balão (<code>border</code>), espaço em volta do balão (<code>margin</code>)</figcaption>
</figure>
See how our element kept growing as we added more properties? Or how it grows if we add more air (`padding`) to the balloon?
Another `content-box` analogy: a bag of popcorn in the microwave, where we initially have the `content` (kernels), the `border` (paper bag) and the `margin` (the empty space inside the microwave). As it heats up, however, `padding` (air/steam inside the bag) is slowly added.
### border-box
The other possible value for the `box-sizing` property is `border-box`. It is very useful when you want to be sure how much space your element will take up on the page, because instead of applying the height and width only to the content, it uses the whole element (`content + padding + border`). This helps when building responsive layouts, since we guarantee that elements will have exactly the size we define, even when using relative units (`%`, `em`, `rem`, etc.).
Using the same example as in [Fig.7][Fig7], but adding the `box-sizing: border-box;` property, we get a final element with the 100px of height and width we defined earlier.
<figure>
<a id="border-box-example"></a>
<img alt="Div com 100 px de altura e largura, 10px de padding, 5px de border e 5px de margin, porém desta vez utilizando o valor border-box" src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ga0e6zyju1aqknvc71vy.png"/>
<figcaption> Fig. 12 - Div com 100 px de altura e largura, 10px de padding, 5px de border e 5px de margin, porém desta vez utilizando o valor <code>border-box</code> </figcaption>
</figure>
The difference is that our content has now shrunk to 70px of height and width so that the total does not exceed 100px.
<figure>
<a id="computed-border-box"></a>
<img alt="Imagem da aba computed mostrando que o conteúdo foi reduzido para 70px de altura e largura" src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gn7pkcr45w7nup242uoi.png" />
<figcaption> Fig. 13 - Aba <code>computed</code> ao inspecionar o elemento </figcaption>
</figure>
See the code here:
{% codepen https://codepen.io/fhmurakami/pen/wvLwggR %}
In this case, we should think of something whose final measurements cannot exceed a given size. For that we'll use a cooler as the maximum size; the box itself will therefore represent our element's `border`:
<figure>
<a id="cooler-vazio"></a>
![Red cooler with a white rim and no lid](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m9whp64mx7skktnnryns.png)
<figcaption>
Fig. 14 - Cooler (`border`)
</figcaption>
</figure>
The `content` will be the drink we want to chill, and the ice is the `padding`:
<center>
<table>
<tr>
<td>
<figure>
<a id="barril-chopp"></a>
<img alt="Barril de chopp" src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rq41pe25twoxpbzli7g2.png" />
<figcaption>Fig. 15 - Barril de chopp (<code>content</code>)</figcaption>
</figure>
</td>
<td>
<figure>
<a id="gelo"></a>
<img alt="Cubos de gelo" src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3soshxk0pfw00i2iju2a.png" />
<figcaption>Fig. 16 - Cubos de gelo (<code>padding</code>)</figcaption>
</figure>
</td>
</tr>
<tr>
<td colspan="2">
<figure>
<a id="cooler-cheio"></a>
<img alt="Caixa térmica com a bebida e o gelo" src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7rpvts1fh4n098sntuw7.png" />
<figcaption>Fig. 17 - Caixa térmica com a bebida e o gelo</figcaption>
</figure>
</td>
</tr>
</table>
</center>
Notice that the more ice we add, the smaller the drink we can chill. Since content can't have negative measurements, the smallest possible size is 0px. However, suppose our box is 100px in `height` by 120px in `width` and we set a `padding` of 60px: in total we'd have 120px of ice both horizontally and vertically, that is, the ice would overflow the cooler.
<figure>
<a id="padding-overflow"></a>
![Red cooler with ice exceeding the box's maximum height](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7v4qnxgw0piducwagqdl.png)
<figcaption>
Fig. 18 - Ice overflowing the cooler's maximum limit (height)
</figcaption>
</figure>
The same happens with our HTML element:
<figure>
<a id="border-box-overflow"></a>
![Image of the Computed tab showing the content reduced to 0px of height and width, while the element's total height increased to 150px](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5uzc82a908v1a80mhoe3.png)
<figcaption>
Fig. 19 - Inspecting the element, you can see that when we add a padding larger than the element's total size, the content is reduced to 0x0 px and the height grows to 150px, even with `box-sizing: border-box`
</figcaption>
</figure>
See the code here:
{% codepen https://codepen.io/fhmurakami/pen/mdZbJZp %}
## Conclusion
Now that you know the basic structure of the _box model_ and the `box-sizing` property, it will be easier to understand how elements behave on your web page and to know when to use each of the values (`content-box` and `border-box`). If you inspect the elements of the sites you use day to day, you'll see that most of them use `border-box`, since this property has made responsive design much easier. :)
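A widely used way to opt an entire page into this behavior is a universal rule like the sketch below (many CSS resets and frameworks ship something equivalent):

```css
/* Apply border-box sizing to every element, including pseudo-elements */
*,
*::before,
*::after {
  box-sizing: border-box;
}
```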
Congratulations on making it this far!
![Beer mugs toasting](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lsjnxtd5bpk1hyen7iqq.png)
Any questions? Any suggestions? Feel free to leave a comment or, if you prefer, send a private message on [LinkedIn](https://www.linkedin.com/in/felipe-murakami/).
> ### **Note** ❗🚫
> All images in this article were made by me; please do not use them without proper consent/credit.
> The ready-made assets I used are listed in the references, and their use in non-commercial projects is allowed.
## References
<a id="ref1"></a>[1] [The CSS Box Model Explained by Living in a Boring Suburban Neighborhood](https://blog.codeanalogies.com/2017/03/27/the-css-box-model-explained-by-living-in-a-boring-suburban-neighborhood/)
<a id="ref2"></a>[2] [MDN Web Docs - The box model](https://developer.mozilla.org/en-US/docs/Learn/CSS/Building_blocks/The_box_model)
<a id="ref3"></a>[3] [MDN Web Docs - Introduction to the CSS basic box model](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_box_model/Introduction_to_the_CSS_box_model)
<a id="ref4"></a>[4] [MDN Web Docs - box-sizing](https://developer.mozilla.org/en-US/docs/Web/CSS/box-sizing)
<a id="ref5"></a>[5] [W3Schools - CSS Box Model](https://www.w3schools.com/css/css_boxmodel.asp)
<a id="ref6"></a>[6] [W3Schools - CSS Box Sizing](https://www.w3schools.com/css/css3_box-sizing.asp)
<a id="ref7"></a>[7] [The Odin Project - Foundations Course - The Box Model](https://www.theodinproject.com/lessons/foundations-the-box-model#introduction)
<a id="ref8"></a>[8] [Learn CSS BOX MODEL - With Real World Examples](https://www.youtube.com/watch?v=nSst4-WbEZk)
<a id="ref9"></a>[9] [Learn CSS Box Model in 8 minutes](https://www.youtube.com/watch?v=rIO5326FgPEhttps://www.youtube.com/watch?v=rIO5326FgPE)
<a id="ref10"></a>[10] [box-sizing: border-box (EASY!)](https://www.youtube.com/watch?v=HdZHcFWcAd8)
<a id="ref11"></a>[11] [Assets utilizados nas imagens para os exemplos de Box Model](https://butterymilk.itch.io/tiny-wonder-farm-asset-pack)
<a id="ref12"></a>[12] [Assets - Caneca de cerveja](https://henrysoftware.itch.io/godot-pixel-food)
[Fig1]: #computed
[Fig2]: #casa
[Fig3]: #casa-lote
[Fig4]: #padding
[Fig5]: #border
[Fig6]: #margin
[Fig7]: #content-box-example
[Fig8]: #baloon
[Fig9]: #candies
[Fig10]: #baloon-candies
[Fig11]: #baloon-full
[Fig12]: #border-box-example
[Fig13]: #computed-border-box
[Fig14]: #cooler-vazio
[Fig15]: #barril-chopp
[Fig16]: #gelo
[Fig17]: #cooler-cheio
[Fig18]: #padding-overflow
[Fig19]: #border-box-overflow
[Ref1]: #ref1 | fhmurakami |
1,912,874 | How to Create Responsive Card Slider in HTML CSS & JavaScript | You may have seen card or image sliders on different websites, but have you ever thought about... | 0 | 2024-07-05T14:49:39 | https://www.codingnepalweb.com/create-responsive-card-slider-html-javascript/ | webdev, javascript, html, css |
![How to Create Responsive Card Slider in HTML CSS & JavaScript](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2mh1dcmhc6qfo5dthnpa.png)
You may have seen card or image sliders on different [websites](https://www.codingnepalweb.com/category/website-design/), but have you ever thought about creating your own? Creating this kind of slider is straightforward, especially with SwiperJS — a leading library for modern, touch-friendly, responsive sliders.
In this blog post, I’ll guide you through creating a Responsive Card Slider using [HTML, CSS](https://www.codingnepalweb.com/category/html-and-css/), and [JavaScript](https://www.codingnepalweb.com/category/javascript/) (SwiperJS) and how to make it attractive with the glassmorphism effect. By the end of this post, you’ll have a visually appealing and interactive slider for your website projects or portfolio.
If you prefer not to use SwiperJS and want to create a slider with vanilla JavaScript, check out this blog post on [Responsive Image Slider in HTML CSS & JavaScript](https://www.codingnepalweb.com/responsive-image-slider-html-css-javascript/). Creating the slider with vanilla JavaScript can help you understand the underlying mechanisms of sliders and enhance your JavaScript skills.
## Video Tutorial of Responsive Card Slider in HTML CSS & JavaScript
{% embed https://www.youtube.com/watch?v=XxG7vqFecR8 %}
If you prefer video tutorials, the YouTube video above is a great resource. It explains each line of code and provides comments to make the process of creating your [card slider](https://www.codingnepalweb.com/draggable-card-slider-html-css-javascript/) project easy to follow. If you prefer reading or need a step-by-step guide, keep following this post.
## Steps to Create Responsive Card Slider in HTML & JavaScript
To create a responsive card slider using HTML, CSS, and JavaScript (SwiperJS), follow these simple step-by-step instructions:
- Create a folder with any name you like, e.g., card-slider.
- Inside it, create the necessary files: `index.html`, `style.css`, and `script.js`.
- Download the [Images](https://codingnepalweb.com/custom-projects/card-slider-images-24-07-05.zip) folder and put it in your project directory. This folder contains all the images you’ll need for this card slider. Alternatively, you can also use your own images.
In your `index.html` file, add the essential HTML markup with different semantic tags and SwiperJS CDN links to create the card slider layout.
```html
<!DOCTYPE html>
<!-- Coding By CodingNepal - www.codingnepalweb.com -->
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Card Slider HTML and CSS | CodingNepal</title>
<!-- Linking SwiperJS CSS -->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/swiper@11/swiper-bundle.min.css">
<link rel="stylesheet" href="style.css">
</head>
<body>
<div class="container swiper">
<div class="slider-wrapper">
<div class="card-list swiper-wrapper">
<div class="card-item swiper-slide">
<img src="images/img-1.jpg" alt="User Image" class="user-image">
<h2 class="user-name">James Wilson</h2>
<p class="user-profession">Software Developer</p>
<button class="message-button">Message</button>
</div>
<div class="card-item swiper-slide">
<img src="images/img-2.jpg" alt="User Image" class="user-image">
<h2 class="user-name">Sarah Johnson</h2>
<p class="user-profession">Graphic Designer</p>
<button class="message-button">Message</button>
</div>
<div class="card-item swiper-slide">
<img src="images/img-3.jpg" alt="User Image" class="user-image">
<h2 class="user-name">Michael Brown</h2>
<p class="user-profession">Project Manager</p>
<button class="message-button">Message</button>
</div>
<div class="card-item swiper-slide">
<img src="images/img-4.jpg" alt="User Image" class="user-image">
<h2 class="user-name">Emily Davis</h2>
<p class="user-profession">Marketing Specialist</p>
<button class="message-button">Message</button>
</div>
<div class="card-item swiper-slide">
<img src="images/img-5.jpg" alt="User Image" class="user-image">
<h2 class="user-name">Christopher Garcia</h2>
<p class="user-profession">Data Scientist</p>
<button class="message-button">Message</button>
</div>
<div class="card-item swiper-slide">
<img src="images/img-6.jpg" alt="User Image" class="user-image">
<h2 class="user-name">Richard Wilson</h2>
<p class="user-profession">Product Designer</p>
<button class="message-button">Message</button>
</div>
</div>
<div class="swiper-pagination"></div>
<div class="swiper-slide-button swiper-button-prev"></div>
<div class="swiper-slide-button swiper-button-next"></div>
</div>
</div>
<!-- Linking SwiperJS script -->
<script src="https://cdn.jsdelivr.net/npm/swiper@11/swiper-bundle.min.js"></script>
<!-- Linking custom script -->
<script src="script.js"></script>
</body>
</html>
```
In your `style.css` file, style your card slider, and give it a sleek and modern glassmorphism effect. Experiment with different CSS properties such as colors, fonts, and backgrounds to make your slider more attractive.
```css
/* Importing Google Font - Montserrat */
@import url('https://fonts.googleapis.com/css2?family=Montserrat:ital,wght@0,100..900;1,100..900&display=swap');
* {
margin: 0;
padding: 0;
box-sizing: border-box;
font-family: "Montserrat", sans-serif;
}
body {
display: flex;
align-items: center;
justify-content: center;
min-height: 100vh;
background: url("images/bg.jpg") #030728 no-repeat center;
}
.slider-wrapper {
overflow: hidden;
max-width: 1200px;
margin: 0 70px 55px;
}
.card-list .card-item {
height: auto;
color: #fff;
user-select: none;
padding: 35px;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
border-radius: 10px;
backdrop-filter: blur(30px);
background: rgba(255, 255, 255, 0.2);
border: 1px solid rgba(255, 255, 255, 0.5);
}
.card-list .card-item .user-image {
width: 150px;
height: 150px;
border-radius: 50%;
margin-bottom: 40px;
border: 3px solid #fff;
padding: 4px;
}
.card-list .card-item .user-profession {
font-size: 1.15rem;
color: #e3e3e3;
font-weight: 500;
margin: 14px 0 40px;
}
.card-list .card-item .message-button {
font-size: 1.25rem;
padding: 10px 35px;
color: #030728;
border-radius: 6px;
font-weight: 500;
cursor: pointer;
background: #fff;
border: 1px solid transparent;
transition: 0.2s ease;
}
.card-list .card-item .message-button:hover {
background: rgba(255, 255, 255, 0.1);
border: 1px solid #fff;
color: #fff;
}
.slider-wrapper .swiper-pagination-bullet {
background: #fff;
height: 13px;
width: 13px;
opacity: 0.5;
}
.slider-wrapper .swiper-pagination-bullet-active {
opacity: 1;
}
.slider-wrapper .swiper-slide-button {
color: #fff;
margin-top: -55px;
transition: 0.2s ease;
}
.slider-wrapper .swiper-slide-button:hover {
color: #4658ff;
}
@media (max-width: 768px) {
.slider-wrapper {
margin: 0 10px 40px;
}
.slider-wrapper .swiper-slide-button {
display: none;
}
}
```
In your `script.js` file, add JavaScript code to initialize SwiperJS, and make your card slider functional.
```javascript
const swiper = new Swiper('.slider-wrapper', {
loop: true,
grabCursor: true,
spaceBetween: 30,
// Pagination bullets
pagination: {
el: '.swiper-pagination',
clickable: true,
dynamicBullets: true
},
// Navigation arrows
navigation: {
nextEl: '.swiper-button-next',
prevEl: '.swiper-button-prev',
},
// Responsive breakpoints
breakpoints: {
0: {
slidesPerView: 1
},
768: {
slidesPerView: 2
},
1024: {
slidesPerView: 3
}
}
});
```
That’s it! If you’ve added the code correctly, you’re ready to see your card slider. Open the `index.html` file in your preferred browser to view the slider in action.
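If you'd rather serve the files over HTTP instead of opening them directly (optional, and any static file server works), one possibility is Python's built-in server:

```bash
# Run from the project folder, then visit http://localhost:8000
python3 -m http.server 8000
```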
## Conclusion and final words
By following these steps, you have created a responsive card slider using HTML, CSS, and JavaScript (SwiperJS). This slider project not only helps you understand the basics of web development but also demonstrates the power of using libraries to create interactive and useful web components easily.
Feel free to customize the slider and experiment with different settings to make it your own. For more customization details, you can check out the [SwiperJS](https://swiperjs.com/get-started) documentation.
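For instance, if you wanted the slider to advance on its own, SwiperJS supports an `autoplay` option that can be merged into the configuration from `script.js` (the delay value below is just illustrative):

```javascript
const swiper = new Swiper('.slider-wrapper', {
  loop: true,
  grabCursor: true,
  spaceBetween: 30,
  // Advance automatically every 2.5 seconds; keep playing after user interaction
  autoplay: {
    delay: 2500,
    disableOnInteraction: false,
  },
  // ...keep the pagination, navigation, and breakpoints settings from above
});
```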
For additional inspiration, check out my blog post on [10+ Best Image Sliders with Source Codes](https://www.codingnepalweb.com/best-javascript-image-sliders-with-code/). Among these sliders, some utilize SwiperJS, some use Owl Carousel, and others are created with vanilla JavaScript, providing a wide range of examples to learn from.
If you encounter any problems while creating your slider, you can download the source code files for this project for free by clicking the “Download” button. You can also view a live demo of it by clicking the “View Live” button.
[View Live Demo](https://www.codingnepalweb.com/demos/create-responsive-card-slider-html-javascript/)
[Download Code Files](https://www.codingnepalweb.com/create-responsive-card-slider-html-javascript/) | codingnepal |
1,911,986 | Track your performance using Habitica, Timestream and Grafana | Habitica is an excellent tool for keeping yourself sticking to what you have planned for yourself.... | 0 | 2024-07-05T14:44:26 | https://pabis.eu/blog/2024-07-05-Track-Habitica-Timestream-Grafana.html | habitica, grafana, lambda, timestream |
Habitica is an excellent tool for keeping yourself sticking to what you have planned. Although it doesn't work for everyone, it is nevertheless worth trying. (This is my third try at using Habitica and it seems to be working, with my longest streak at 949 days.) As discussed in the previous post, Habitica has an API that lets you retrieve data from the app and perform actions. As a warmup, [we created a simple Lambda script](https://pabis.eu/blog/2024-06-30-Habitica-Item-Seller-Lambda-SAM.html) that sells excess items in our Habitica inventory. Today, we are going to create a similar scheduled Lambda function that will collect statistics from our profile and store them in an Amazon Timestream database. Later we will be able to view the data using Grafana.
Creating Timestream database and table
--------------------------------------
This is a very simple step. In the same `template.yaml` [file as before](https://pabis.eu/blog/2024-06-30-Habitica-Item-Seller-Lambda-SAM.html), place new resources for Timestream. It is a standard CloudFormation resource and doesn't take too many parameters. I decided to set the memory storage to 7 days as our function will not collect so much data so it won't cost you a lot.
```yaml
# Database and table for collecting Habitica statistics
HabiticaStatsTimestream:
Type: AWS::Timestream::Database
Properties:
DatabaseName: HabiticaStats
HabiticaStatsTable:
Type: AWS::Timestream::Table
Properties:
TableName: HabiticaStatsTable
DatabaseName: !Ref HabiticaStatsTimestream
RetentionProperties:
MemoryStoreRetentionPeriodInHours: 168
MagneticStoreRetentionPeriodInDays: 365
```
New Lambda function
-------------------
I will base the new function on the previous work. I will copy `auth.py` and duplicate the template from the item seller. The new code will be in a new directory called `collect-stats`. The first file, `actions.py`, will be used for API calls to Habitica. We will first define some constants and import `requests`. By default, Habitica returns a lot of data in the `stats` field, so we will filter some of it out.
```python
import requests
HABITICA_URL="https://habitica.com/api/v3"
STATS_TO_GET = ['gp', 'exp', 'mp', 'lvl', 'hp']
```
Now we can construct the function that will do the API call and return the statistics we want to track. It will also return the username, which we will use later as a dimension. It is not required, but it is nice to have.
```python
def get_stats(headers: dict) -> tuple[str, dict]:
url = f"{HABITICA_URL}/user?userFields=stats"
response = requests.get(url, headers=headers)
code = response.status_code
if code == 200:
user = response.json()['data']['auth']['local']['username']
stats = response.json()['data']['stats']
stats = {k: v for k, v in stats.items() if k in STATS_TO_GET}
return user, stats
raise Exception(response.json()['message'])
```
The next function is also very simple. It will just construct the appropriate objects to store in the Timestream table. We will use the standard `boto3` library and iterate through all the measurements we want to save.
```python
import boto3
def store_in_timestream(database: str, table: str, timestamp: float, username: str, stats: dict):
dimensions = [ {'Name': 'username', 'Value': username} ]
client = boto3.client('timestream-write')
records = [
{
'Dimensions': dimensions,
'MeasureName': stat,
'MeasureValue': str(value),
'MeasureValueType': 'DOUBLE',
'Time': str(int(timestamp * 1000)),
'TimeUnit': 'MILLISECONDS',
}
for stat, value in stats.items()
]
client.write_records(DatabaseName=database, TableName=table, Records=records)
```
Now it's time to create `main.py` where we will glue together the two functions. It will retrieve all the needed configuration from environment variables, such as Timestream parameters and API key location. My function in the repository also has some logging so that I can see errors.
```python
from actions import get_stats
from store import store_in_timestream
from auth import get_headers
import os, datetime
HEADERS = get_headers()
DATABASE_NAME = os.getenv('DATABASE_NAME')
TABLE_NAME = os.getenv('TABLE_NAME')
def lambda_handler(event, context):
current_utc = datetime.datetime.now(datetime.UTC)
current_utc_string = current_utc.strftime("%Y-%m-%d %H:%M:%S %Z") # Such as 2024-07-03 21:10:05 UTC
try:
username, stats = get_stats(HEADERS)
store_in_timestream(DATABASE_NAME, TABLE_NAME, current_utc.timestamp(), username, stats)
except Exception as e:
return {
"statusCode": 500, # Mark as error
"body": f"{current_utc_string}: {str(e)}"
}
return {
"statusCode": 200, # Success
"body": f"Collected statistics for time {current_utc_string}."
}
```
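For reference, `auth.py` comes from the previous post; a minimal sketch of what it might look like is below (the secret's JSON keys `user_id` and `api_token` are assumptions here, and the authoritative version lives in the linked repository). Habitica's API expects the `x-api-user` and `x-api-key` headers.

```python
# auth.py - sketch: build Habitica API headers from a Secrets Manager secret
import boto3, json, os

def get_headers() -> dict:
    secret_arn = os.getenv("HABITICA_SECRET")
    client = boto3.client("secretsmanager")
    secret = json.loads(
        client.get_secret_value(SecretId=secret_arn)["SecretString"]
    )
    return {
        "x-api-user": secret["user_id"],   # assumed key name in the secret
        "x-api-key": secret["api_token"],  # assumed key name in the secret
        "Content-Type": "application/json",
    }
```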
Adding the function to SAM
--------------------------
I decided that my function will collect the data every 30 minutes. I don't think anything more granular is necessary, and 30 minutes gives a smooth enough graph. In the same template as before, we need to reference the secret stored in [Secrets Manager](https://pabis.eu/blog/2024-06-30-Habitica-Item-Seller-Lambda-SAM.html) as well as the Timestream database and table.
```yaml
HabiticaCollectStats:
Type: AWS::Serverless::Function
Properties:
CodeUri: collect_stats/
Handler: main.lambda_handler
Runtime: python3.12
Architectures:
- arm64
Policies:
- AWSSecretsManagerGetSecretValuePolicy:
SecretArn: !Ref HabiticaSecret
Environment:
Variables:
HABITICA_SECRET: !Ref HabiticaSecret
DATABASE_NAME: !Ref HabiticaStatsTimestream
TABLE_NAME: !Select [ "1", !Split [ "|", !Ref HabiticaStatsTable ] ]
Events:
Schedule30Min:
Type: Schedule
Properties:
Schedule: cron(*/30 * * * ? *)
Enabled: true
```
Because CloudFormation returns the reference to the table as `Database|Table`, we unfortunately have to split it and `!Select` the second element. However, this is not the end! If we now wait for the function to run, we will see something unexpected.
```
Error collecting stats at 2024-06-30 19:45:14 UTC: An error occurred (AccessDeniedException) when calling the DescribeEndpoints operation: User: arn:aws:sts::1234567890:assumed-role/habitica-item-seller-HabiticaCollectStatsRole... is not authorized to perform: timestream:DescribeEndpoints
```
SAM doesn't have policy templates for Timestream, similar to the one we use for Secrets Manager. However, we can define an inline policy directly. In the `Policies` section add the following using a `Statement` object.
```yaml
Policies:
- AWSSecretsManagerGetSecretValuePolicy:
SecretArn: !Ref HabiticaSecret
- Statement:
- Effect: Allow
Action:
- timestream:WriteRecords
- timestream:DescribeTable
Resource: !GetAtt HabiticaStatsTable.Arn
- Effect: Allow
Action:
- timestream:DescribeEndpoints
Resource: "*"
```
In the directory you have `template.yaml` run `sam build`. If this is the first time running SAM, use `sam deploy --guided` to deploy the stack. Otherwise just `sam deploy` is sufficient.
All updates to the Lambda side of the project can be found in the new `v2` tag in here: [https://github.com/ppabis/habitica-item-seller/tree/v2](https://github.com/ppabis/habitica-item-seller/tree/v2).
Querying Timestream for collected records
-----------------------------------------
If everything looks correct in CloudWatch logs and there are no errors in the Monitoring tab of the Lambda function, you can peek into the Timestream database. In the Timestream console, select `Tables` in the left pane, click on your table (verify if you are in the right region) and in the top-right `Actions` select `Query`. Run the following example query. Depending on your schedule, you should see some records.
![Query table](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gf5t2o104b6h4oj3us5o.jpg)
```sql
SELECT * FROM "HabiticaStats"."HabiticaStatsTable" WHERE time between ago(3h) and now() ORDER BY time DESC LIMIT 10
```
Deploying Grafana
-------------------
I created a Terraform repository for spawning Grafana on an EC2 instance. It is easier to manage files there than in CloudFormation. I will just install Docker and `docker-compose` on the instance and run Grafana. Because it will have authentication, it is smart to have some means of encryption, so I will put it behind Nginx with a self-signed certificate. Alternatively, you can risk using plaintext, or use a VPN, an SSH tunnel or an ALB.
But let's start with the configuration. I will skip the VPC creation part and the providers, and jump directly to the interesting parts. For the complete project follow [this link](https://github.com/ppabis/grafana-ec2-docker/tree/main). First, we will create the Grafana config that will set up our admin user and password. It will also allow any origin to connect, as I did not configure any domains.
```ini
[server]
enforce_domain = false
[security]
disable_initial_admin_creation = false
admin_user = secretadmin
admin_password = PassWord-100-200
cookie_secure = true
cookie_samesite = none
```
Next up is Nginx configuration. This is just a simple reverse proxy with TLS. You can also set up HTTP to HTTPS redirection. The host name in `proxy_pass` needs to match the name of the container in `docker-compose.yml` which we will define next. We will handle certificates in a later stage.
```nginx
server {
listen 443 ssl;
server_name _;
ssl_certificate /etc/nginx/ssl/selfsigned.crt;
ssl_certificate_key /etc/nginx/ssl/selfsigned.key;
location / {
proxy_pass http://grafana:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
```
Next we will create the Docker Compose file. It will create a common network for the two containers, mount the configuration and data directories, and expose Nginx's port so that it is accessible from the Internet. For Grafana, it is also important to set the user and group that will be able to write to the data directory. The environment variables will be picked up from the user data script at a later stage.
```yaml
---
networks:
habitica-stats:
driver: bridge
services:
grafana:
image: grafana/grafana:latest
volumes:
- /opt/grafana:/var/lib/grafana
- /opt/grafana.ini:/etc/grafana/grafana.ini
user: "${RUN_AS_ID}:${RUN_AS_GROUP}"
networks:
- habitica-stats
restart: always
nginx:
image: nginx:latest
volumes:
- /opt/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
- /opt/nginx/selfsigned.crt:/etc/nginx/ssl/selfsigned.crt
- /opt/nginx/selfsigned.key:/etc/nginx/ssl/selfsigned.key
ports:
- 443:443
networks:
- habitica-stats
restart: always
```
All of the above files are small and can be stored in SSM Parameter Store. It is more convenient than copying them over through S3. In Terraform I defined three parameters. I suggest keeping the Grafana configuration as `SecureString` as it contains the password.
```terraform
resource "aws_ssm_parameter" "grafana_ini" {
name = "/habitica-stats/grafana/ini"
type = "SecureString"
value = file("./resources/sample.ini")
}
resource "aws_ssm_parameter" "nginx_conf" {
name = "/habitica-stats/nginx/conf"
type = "String"
value = file("./resources/nginx.conf")
}
resource "aws_ssm_parameter" "docker_compose" {
name = "/habitica-stats/docker_compose"
type = "String"
value = file("./resources/docker-compose.yml")
}
```
Now, as we know the names of the parameters, we can define our user data file. It will run as we create the instance and download all the parameters to their respective files. Important to note: this instance **requires** an IPv4 connection to the Internet. I tried using just IPv6, and it turned into a yak-shaving exercise to copy everything over with S3 or other means. GitHub doesn't support IPv6. Grafana plugins are not accessible through IPv6. So either give your instance a public IPv4 address or use a NAT Gateway.
### The script below will do the following:
- Install Docker and `docker-compose`
- Create configuration for Grafana
- Create configuration for Nginx including self-signed SSL
- Start the containers using compose file
```bash
#!/bin/bash
# Docker and docker-compose installation
yum install -y docker
systemctl enable --now docker
curl -o /usr/local/bin/docker-compose -L "https://github.com/docker/compose/releases/download/v2.28.1/docker-compose-linux-aarch64"
chmod +x /usr/local/bin/docker-compose
# Grafana configuration
useradd -r -s /sbin/nologin grafana
mkdir -p /opt/grafana
chown -R grafana:grafana /opt/grafana
aws ssm get-parameter --name ${param_grafana_ini} --with-decryption --query Parameter.Value --output text > /opt/grafana.ini
chown grafana:grafana /opt/grafana.ini
# Nginx configuration
mkdir -p /opt/nginx
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /opt/nginx/selfsigned.key -out /opt/nginx/selfsigned.crt -subj "/CN=grafana"
aws ssm get-parameter --name ${param_nginx_conf} --query Parameter.Value --output text > /opt/nginx/nginx.conf
# Run the stack
# Docker compose picks up environment variables just as shell scripts
export RUN_AS_ID=$(id -u grafana)
export RUN_AS_GROUP=$(id -g grafana)
aws ssm get-parameter --name ${param_docker_compose} --query Parameter.Value --output text > /opt/docker-compose.yml
docker-compose -f /opt/docker-compose.yml up -d
```
IAM profile for the instance
----------------------------
In order to allow Grafana to read from Timestream, we will need to grant permissions to the EC2 instance. I attached two policies for simplicity: `AmazonSSMManagedInstanceCore`, which will allow access to SSM Parameters and, in case we need to debug, to SSM Session Manager, and `AmazonTimestreamReadOnlyAccess`, which allows Grafana to not only query the database but also list databases and tables, which will make the UI usable for configuration. You can adapt it to your needs by exporting the outputs from the SAM template and importing them in Terraform using the `aws_cloudformation_export` data provider.
```terraform
resource "aws_iam_instance_profile" "grafana_ec2_profile" {
name = "grafana_ec2-profile"
role = aws_iam_role.grafana_ec2_role.name
}
resource "aws_iam_role" "grafana_ec2_role" {
name = "grafana_ec2-role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Effect = "Allow"
Principal = { Service = "ec2.amazonaws.com" }
Action = "sts:AssumeRole"
}]
})
}
resource "aws_iam_role_policy_attachment" "ssm" {
role = aws_iam_role.grafana_ec2_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}
resource "aws_iam_role_policy_attachment" "timestream" {
role = aws_iam_role.grafana_ec2_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonTimestreamReadOnlyAccess"
}
```
Creating new instance
---------------------
For the instance we also need a security group. I will allow HTTPS traffic from any IPv6 address and from my home IPv4 address on port 443. My AMI choice is standard Amazon Linux 2023 and I will use a Graviton instance as it's cheaper. If you want to keep it in your free tier, use `t2.micro` or `t2.nano`.
```terraform
resource "aws_security_group" "grafana_sg" {
vpc_id = aws_vpc.vpc.id
name = "grafana-sg"
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
# This will allow anyone to access via HTTPS when using IPv6, or just your IPv4 address
ipv6_cidr_blocks = ["::/0"]
cidr_blocks = ["12.13.14.15/32"]
}
}
```
For IPv4 I will associate a public IP address. If you have a running NAT Gateway it might be a better idea to use it instead. The user data script is a template that should be filled with the names of the SSM parameters we created before. I will also produce outputs of the IPs concatenated with `https://` so that I have easy access to the service.
```terraform
resource "aws_instance" "grafana" {
ami = data.aws_ssm_parameter.al2023.value
instance_type = "t4g.micro"
vpc_security_group_ids = [aws_security_group.grafana_sg.id]
subnet_id = aws_subnet.public.id
associate_public_ip_address = true
ipv6_address_count = 1
iam_instance_profile = aws_iam_instance_profile.grafana_ec2_profile.name
user_data = templatefile("./resources/user-data.sh", {
param_docker_compose = aws_ssm_parameter.docker_compose.name,
param_nginx_conf = aws_ssm_parameter.nginx_conf.name,
param_grafana_ini = aws_ssm_parameter.grafana_ini.name
})
tags = { Name = "grafana-ec2" }
}
output "site_ipv6" { value = "https://[${aws_instance.grafana.ipv6_addresses[0]}]" }
output "site_ipv4" { value = "https://${aws_instance.grafana.public_ip}" }
```
The completed Grafana instance project can be
[found here](https://github.com/ppabis/grafana-ec2-docker/tree/main).
It might take some time to boot and create all the containers, so leave it for
a few minutes after all the Terraform processes are done.
```bash
$ tofu init
$ tofu apply
```
Configuring Grafana
-------------------
Now the final part is configuring Grafana connections and dashboards. Go to the address that you got as an output. The certificate is self-signed so you will get a warning. Accept it and log in with the credentials you set up in `grafana.ini`.
![Self-Signed warning](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/csdee2lqi79bt5w6s296.jpg)
Select `Connections` -> `Add new connection`. Search for Timestream. Install it in the top-right and again in the same spot click `Add new data source`.
![Add new connection](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9hz5s6q4yehjgpbwat04.jpg)
The first thing you have to select is a region where your Timestream database is. Then you should be able to select the database and table from the dropdown.
![Timestream data source](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s7a4mer77ykuqfwyxzh4.jpg)
Once you have added the data source, you can create a new dashboard. On the left-hand side click `Dashboards` and create a new one. Select the Timestream data source and write the following query in the query editor. You can use it for every graph you create; just change the selected measure on the left. Click `Apply` to save the graph to the dashboard.
```sql
SELECT * FROM $__database.$__table WHERE measure_name = '$__measure'
```
![Creating dashboard in Grafana](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ihkn1gos597h2nu0enpy.gif)
[*View better quality video here*](https://pabis.eu/assets/videos/grafana-habitica.mp4)
![Example dashboard](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sjc047smim9qg7z4vxcn.jpg)
You can then create a dashboard that will track your habits and tasks progress as an overall overview. You can also create a graph that will compare how well do your mornings look like (how much exp do you gain between 6 and 10 AM). To do this, set your graph type to bar chart and put the following long query.
```sql
WITH morning_values AS (
SELECT
date_trunc('day', time) AS day,
measure_value::double AS value_at_time,
CASE
WHEN extract(hour from time) = 6 THEN 'value_at_6am'
WHEN extract(hour from time) = 10 THEN 'value_at_10am'
END AS time_period
FROM
$__database.$__table
WHERE
(extract(hour from time) = 6
OR extract(hour from time) = 10)
AND measure_name = '$__measure'
),
aggregated_values AS (
SELECT
day,
MAX(CASE WHEN time_period = 'value_at_6am' THEN value_at_time ELSE NULL END) AS value_at_6am,
MAX(CASE WHEN time_period = 'value_at_10am' THEN value_at_time ELSE NULL END) AS value_at_10am
FROM
morning_values
GROUP BY
day
)
SELECT
value_at_10am - value_at_6am,
day
FROM
aggregated_values
ORDER BY
day ASC
```
It will do the following: first it will find all the measurements that happened at 6AM UTC or 10AM UTC (6:00-6:59 to be exact). Then it will take the largest of the values (so more likely 6:59 than 6:00) and pass it to another query that will group those values by day. So we will have 1st July with two values, 2nd July with two values, etc. Finally, for each day we will subtract the 6AM measured value from the 10AM measured value. Of course I didn't write this query myself. But I like the effect nevertheless. Because of UTC storage I expanded the window from 5AM to 12PM.
![Morning values](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6fxplq0qytl2ws306lme.jpg) | ppabis |
1,912,870 | Mirzapur Season 3 Download all episodes | mirzapur season 3 download all episodes Mirzapur 3 In HD LEAKED For Free Download Mirzapur 3:... | 0 | 2024-07-05T14:42:18 | https://dev.to/banmyaccount/mirzapur-season-3-download-all-episodes-51j2 | mirzapur, mirzapurseason3 | mirzapur season 3 download all episodes
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rbwlvacatvup6st795ee.png)
Mirzapur 3 In HD LEAKED For Free Download
Mirzapur 3: Mirzapur 3 is one of the most anticipated web series of the year, one that OTT buffs have been waiting for so long. Ever since season 2 ended, people have been coming up with theories and possibilities about what will happen next, and finally the third season is here: it was released on July 5th at midnight IST. But in a surprising turn of events, Mirzapur 3 was leaked online a few hours after its release.
Mirzapur 3 Full Cast
The third season of the crime thriller introduced us to the ensemble cast, the familiar faces which were already loved by people for two seasons. The intriguing trailer features people including Ali Fazal, Vijay Varma, Pankaj Tripathi, Shweta Tripathi Sharma, Rasika Dugal, Harshita Gaur, Priyanshu Painyulli, Anjum Sharma, Sheeba Chadha, Rajesh Tailang, and Isha Talwar. Furthermore, if reports are believed to be true Ali Fazal has accidentally confirmed the appearance of Panchayat 3's Jeetendra Kumar.
Mirzapur 3 Plot
Mirzapur 3 Consists of 10 episodes each being 45 minutes long. The plotline of the series starts with the deceased Munna Tripathi, son of Kaleen Bhaiyaa who has been ruling the Purvanchal. Now, Guddu gets the getaway to rule the area with the support of Beena Tripathi, wife of Kaleen Bhaiya. However, the trailer shows the unexpected return of Kaleen Bhaiya that shifts the power game to a new level.
Mirzapur 3 Got Leaked In HD Hours After Release
Unfortunately, the crime drama was leaked online in HD just a few hours after its release. The unlawful activity has shocked the cast and crew to their core.
Say No To Piracy
Pirating movies and series can have a larger impact than anybody thinks. It not only causes business losses but also discourages makers and artists from giving their best when it comes to entertaining people. So, it's suggested to say goodbye to piracy once and for all for the sake of the art, creativity, and hard work of everyone involved in a project.
DISCLAIMER: Filmibeat doesn't support any form of piracy and urges its users not to indulge themselves in pirating movies and series. For the unversed, it is a punishable offense under the Copyright Act, 1957. Thus, we urge our readers to refrain themselves from downloading and sharing pirating copies of the original series'. | banmyaccount |
1,912,863 | DiscriminatorMap de Doctrine avec Api-platform | Le DiscriminatorMap de Doctrine permet une gestion efficace des entités héritées. Nous allons... | 0 | 2024-07-05T14:40:25 | https://dev.to/aratinau/discriminatormap-de-doctrine-avec-api-platform-4koa | api, symfony, doctrine, webdev | Doctrine's `DiscriminatorMap` enables efficient management of inherited entities.
We will take the simple example of a `Vehicle` class and create two classes that inherit from it, `Bike` and `Car`,
which have attributes of their own but also share those of `Vehicle`.
With API Platform, you can then call `GET /vehicles` to get all vehicles,
`GET /bikes` for all bikes, and `GET /cars` for all cars.
```php
<?php
namespace App\Entity;
use ApiPlatform\Doctrine\Orm\Filter\OrderFilter;
use ApiPlatform\Doctrine\Orm\Filter\SearchFilter;
use ApiPlatform\Metadata\ApiFilter;
use ApiPlatform\Metadata\ApiResource;
use App\Repository\VehicleRepository;
use Doctrine\ORM\Mapping as ORM;
#[ORM\Entity(repositoryClass: VehicleRepository::class)]
#[ORM\InheritanceType('JOINED')]
#[ORM\DiscriminatorColumn(name: 'discr', type: 'string')]
#[ORM\DiscriminatorMap([
'car' => Car::class,
'bike' => Bike::class
])]
#[ApiResource]
#[ApiFilter(OrderFilter::class)]
class Vehicle
{
#[ORM\Id]
#[ORM\GeneratedValue]
#[ORM\Column]
private ?int $id = null;
#[ORM\Column(length: 255)]
#[ApiFilter(SearchFilter::class, strategy: 'partial')]
private ?string $brand = null;
#[ORM\Column]
private ?\DateTimeImmutable $createdAt = null;
public function getId(): ?int
{
return $this->id;
}
public function getBrand(): ?string
{
return $this->brand;
}
public function setBrand(string $brand): static
{
$this->brand = $brand;
return $this;
}
public function getCreatedAt(): ?\DateTimeImmutable
{
return $this->createdAt;
}
public function setCreatedAt(\DateTimeImmutable $createdAt): static
{
$this->createdAt = $createdAt;
return $this;
}
}
```
```php
<?php
namespace App\Entity;
use ApiPlatform\Doctrine\Orm\Filter\OrderFilter;
use ApiPlatform\Metadata\ApiFilter;
use ApiPlatform\Metadata\ApiResource;
use App\Repository\BikeRepository;
use Doctrine\ORM\Mapping as ORM;
#[ORM\Entity(repositoryClass: BikeRepository::class)]
#[ApiResource]
#[ApiFilter(OrderFilter::class)]
class Bike extends Vehicle
{
#[ORM\Column]
private ?bool $hasCarrier = null;
public function hasCarrier(): ?bool
{
return $this->hasCarrier;
}
public function setHasCarrier(bool $hasCarrier): static
{
$this->hasCarrier = $hasCarrier;
return $this;
}
}
```
```php
<?php
namespace App\Entity;
use ApiPlatform\Doctrine\Orm\Filter\OrderFilter;
use ApiPlatform\Metadata\ApiFilter;
use ApiPlatform\Metadata\ApiResource;
use App\Repository\CarRepository;
use Doctrine\ORM\Mapping as ORM;
#[ORM\Entity(repositoryClass: CarRepository::class)]
#[ApiResource]
#[ApiFilter(OrderFilter::class)]
class Car extends Vehicle
{
#[ORM\Column]
private ?int $numberOfDoors = null;
public function getNumberOfDoors(): ?int
{
return $this->numberOfDoors;
}
public function setNumberOfDoors(int $numberOfDoors): static
{
$this->numberOfDoors = $numberOfDoors;
return $this;
}
}
```
Everything is done with
```php
#[ORM\InheritanceType('JOINED')]
#[ORM\DiscriminatorColumn(name: 'discr', type: 'string')]
#[ORM\DiscriminatorMap([
'car' => Car::class,
'bike' => Bike::class
])]
```
`InheritanceType` can also be set to `SINGLE_TABLE`, which means you will have only one table for all of your entities, as in the sketch below.
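For reference, a minimal sketch of how the `Vehicle` attributes would look with single-table inheritance (only the inheritance type changes; the rest of the entity stays the same):

```php
#[ORM\Entity(repositoryClass: VehicleRepository::class)]
#[ORM\InheritanceType('SINGLE_TABLE')]
#[ORM\DiscriminatorColumn(name: 'discr', type: 'string')]
#[ORM\DiscriminatorMap([
    'car' => Car::class,
    'bike' => Bike::class
])]
class Vehicle
{
    // ... same properties and methods as in the JOINED version above
}
```

With `SINGLE_TABLE`, Doctrine stores the child-class fields (`numberOfDoors`, `hasCarrier`) as nullable columns on the one shared table instead of joining per-class tables.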
Also, don't forget to add `extends Vehicle` to the other classes.
Rocket 🚀 | aratinau |
1,902,972 | What was your win this week? | 👋👋👋👋 Reflect on your week -- what's something you're proud of? All wins count -- big or small... | 0 | 2024-07-05T14:37:00 | https://dev.to/devteam/what-was-your-win-this-week-99h | weeklyretro | 👋👋👋👋
Reflect on your week -- what's something you're proud of?
All wins count -- big or small 🎉
Examples of 'wins' include:
- Getting a promotion!
- Starting a new project
- Fixing a tricky bug
- Cooking a special meal 🍳
![animated chefs kiss](https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExcTlqejJ3a2I4amtvcjBpMmFobHlrNTRkbzBsaXN6Zzlzb3ZvaWI5NCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/3o7qDWp7hxhi1N8oF2/giphy.gif)
Happy Friday!
| jess |
1,912,868 | Cloud computing: CaaS vs SaaS- What's the difference- full comparison | CaaS vs SaaS - clouds clash. Good morning and welcome to your daily cloud forecast! Today, we're... | 0 | 2024-07-05T14:36:56 | https://dev.to/momciloo/cloud-computing-caas-vs-saas-whats-the-difference-full-comparison-462j | CaaS vs SaaS - clouds clash.
Good morning and welcome to your daily cloud forecast! Today, we're seeing a dramatic showdown in the digital skies with two major cloud formations taking center stage. On the west side, we have the formidable **[Content as a Service (CaaS)](https://thebcms.com/blog/content-as-a-service-caas)** cloud front, bringing with it a powerful API-driven storm, perfect for content delivery across multiple platforms. Meanwhile, from the East, the friendly but formidable **Software as a Service (SaaS)** system is moving in, promising ease of use and automatic updates for businesses of all sizes.
As these two cloud giants clash, businesses everywhere are left wondering which model will best suit their needs. Will customizable and flexible CaaS bring clear skies to your content strategy? Or will user-friendly SaaS shower your operations with simplicity and efficiency?
But wait! This isn't your typical weather report; it's the ultimate showdown in the digital skies.
In this battle of **CaaS vs SaaS**, which cloud will reign supreme? Stay tuned as we dive into this full comparison of **CaaS vs SaaS**, exploring features, use cases, and advantages. Let's get ready to weather the storm!
## What is CaaS
**Content as a Service (CaaS)** is a cloud computing service model that delivers content through APIs. CaaS allows businesses to store, manage, and deliver content across various platforms and devices without worrying about the backend infrastructure. CaaS decouples content creation and management from its presentation, enabling developers to build applications that pull content from a central repository.
### Key Features of CaaS
1. **API-driven content delivery:** Content is accessed and delivered via APIs, allowing flexibility in presentation.
2. **Centralized content management:** A single repository that acts like a [Content Hub](https://thebcms.com/blog/content-hub-guide) for managing content, which can be distributed across multiple channels.
3. **Scalability:** Easily handle varying content loads and deliver content efficiently at scale.
4. **Customization:** Developers can use any programming language or framework to build applications.
5. **Flexibility:** Content can be reused across different platforms and devices, ensuring a consistent user experience.
## What is SaaS
**Software as a Service (SaaS)** is a cloud service model where software applications are delivered over the internet on a subscription basis. Users can access and use the software without worrying about installation, maintenance, or infrastructure management. SaaS applications are hosted and managed by a service provider.
### Key Features of SaaS
1. **Accessibility:** Access applications from any device with an internet connection.
2. **Subscription model:** Pay-as-you-go pricing without the need for large upfront investments.
3. **Automatic updates:** The provider handles updates and maintenance, ensuring users always have the latest features.
4. **Scalability:** Easily add or remove users and services as needed.
5. **Security:** Providers implement security measures to protect user data.
## CaaS vs SaaS: Differences between two clouds
Since both clouds do the same thing (provide services), it is crucial to understand their roles in cloud computing. So, let's start with a full comparison, exploring features, use cases, and even ways to leverage both approaches.
## CaaS vs SaaS- Full comparison
Let’s start the battle. I’ll try to keep the score.
### CaaS vs SaaS- Round 1: Purpose
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9a3er7x9i25as6p2sevw.jpg)
**CaaS:** Focuses on the management and delivery of content.
CaaS is designed to handle the creation, storage, and distribution of content across multiple channels, making it ideal for organizations that need to manage and deliver content flexibly and scalable.
**SaaS:** Provides software applications over the internet for various functions.
SaaS delivers complete software solutions over the internet, covering a wide range of applications, from office software to customer relationship management (CRM) tools, simplifying end-user access and use.
**Score:**
- CaaS: 1
- SaaS: 1
### Round 2: Content Delivery
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yz2p86xhn3k9qy836xan.jpg)
**CaaS**: Uses APIs to deliver content to any front-end, offering flexibility.
CaaS decouples content storage from presentation, allowing developers to use APIs to fetch and display content in any framework or device. This ensures that content is accessible and reusable across various platforms.
**SaaS**: Includes built-in user interfaces for specific applications.
SaaS applications come with integrated user interfaces tailored for specific tasks, making them easy to use but less flexible in terms of customization and content reuse.
**Score:**
- CaaS: 2
- SaaS: 1
### Round 3: Flexibility
**CaaS:** Highly flexible, enabling content reuse and repurposing across multiple channels.
CaaS supports structured content to be easily reused and adapted for different channels and contexts, providing a high degree of flexibility in content management and delivery.
**SaaS:** Generally designed for specific functional needs without extensive content modularity.
SaaS solutions are built to address particular functional needs, offering predefined features and interfaces that may not support extensive content customization or reuse.
**Score:**
- CaaS: 3
- SaaS: 1
### Round 4: Scalability
**CaaS:** Scalable for handling large volumes of content across various platforms.
CaaS platforms are designed to manage and distribute large amounts of content efficiently, making them suitable for organizations with significant content delivery needs.
**SaaS:** Scalable for user access and software functionality.
SaaS solutions can easily scale to accommodate more users and additional features, ensuring that software performance remains robust as the user base grows.
**Score:**
- CaaS: 3
- SaaS: 2
### Round 5: Management
**CaaS:** Provides robust digital asset management (DAM), collaborative workflows, and lifecycle management.
CaaS platforms include advanced features for managing content assets, supporting collaborative content creation, and lifecycle management to streamline content operations.
**SaaS:** Simplifies access and use of software applications, with updates and maintenance handled by the provider.
SaaS providers take care of software updates and maintenance, freeing users from the burden of managing software infrastructure and allowing them to focus on using the applications.
**Score:**
- CaaS: 4
- SaaS: 3
### Round 6: Infrastructure
**CaaS:** Decouples content creation/storage from presentation, requiring a distributed data delivery platform.
CaaS architecture separates the backend content repository from the frontend presentation layer, using a distributed data delivery platform to ensure fast and efficient content access.
**SaaS:** Hosts software on the provider's servers, accessed via the internet.
SaaS applications are hosted on the provider's infrastructure, making them accessible from anywhere with an internet connection and reducing the need for local installations.
**Score:**
- CaaS: 4
- SaaS: 4
### Round 7: Software access
**CaaS:** Works similarly to SaaS, providing all content through a single outlet.
CaaS centralizes content storage and delivery, ensuring that content is consistently available and easy to manage across different channels.
**SaaS:** Simplifies software access for networks with multiple users.
SaaS's subscription model allows for easy and widespread software access, enabling all users in a network to use the software without the hassle of individual installations.
**Score:**
- CaaS: 5
- SaaS: 5
### Round 8: Speed
**CaaS:** Accelerates content delivery and management processes by centralizing content access.
With CaaS, content can be quickly updated and deployed across various channels, reducing the time and effort required to manage content.
**SaaS:** Speeds up software deployment across networks, as all computers gain access to the software program at once.
SaaS allows for rapid deployment of software across an entire organization, ensuring that all users have immediate access to the latest features and updates without delay.
**Score:**
- CaaS: 6
- SaaS: 6
As you can see, the score is even, so how can you leverage both approaches in your business?
## Headless CMS: Rainbowing CaaS and SaaS
In this cloud battle, it's a tie score between CaaS and SaaS. Enter Headless CMS – the solution to leverage the best features of both models. By decoupling content management from the presentation layer, a headless CMS provides the flexibility of CaaS with the ease of use found in SaaS. This hybrid approach allows businesses to manage content centrally while delivering it seamlessly across multiple platforms and devices.
Why headless? Because it offers:
- **API-First approach:** Perfect for CaaS, enabling flexible content delivery.
- **Centralized management:** Simplifies content handling like SaaS.
- **Scalability and flexibility:** Ensures efficient content reuse and scalability across different channels.
- **Seamless integration:** Bridges various SaaS applications, providing a unified ecosystem.
### Headless CMS benefits for CaaS and SaaS
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/upof6vbjhk16athoernj.jpg)
### How a Headless CMS Helps with CaaS
1. **API-First approach:** A headless CMS is inherently API-driven, aligning perfectly with the CaaS model. Content can be easily managed and delivered across various platforms using APIs.
2. **Centralized Content Hub:** [Building Content Hub with a headless CMS](https://thebcms.com/blog/build-content-hub-headless-cms) facilitating consistent content delivery across multiple channels, including web, mobile, and IoT devices.
3. **Flexibility in presentation:** Developers can use any technology stack to build applications, ensuring that content is displayed optimally across different devices and platforms.
4. **Scalability:** Easily scale [content operations](https://thebcms.com/blog/content-operations-headless-cms) to meet the demands of different channels and user bases.
### How a Headless CMS Helps with SaaS
1. **Enhanced usability:** Users can manage content without dealing with the complexities of infrastructure or backend management, aligning with the ease of use provided by SaaS.
2. **Integration with SaaS applications:** A headless CMS can integrate seamlessly with various SaaS applications (e.g., CRM, marketing automation) through APIs, providing a unified content management and delivery ecosystem.
Learn more: [How to use Headless CMS as your job board CMS](https://thebcms.com/blog/headless-cms-as-job-board-cms)
3. **Cost efficiency:** Subscription-based pricing models of headless CMS platforms offer predictable costs, similar to SaaS, making it budget-friendly for businesses.
4. **Rapid Deployment:** Quick and efficient deployment of content across various SaaS platforms, ensuring timely updates and content consistency.
## Headless CMS makes CaaS and SaaS cloud champions
**A calm after the storm**
And now for your final weather update! It looks like the anticipated storm of the **CaaS vs. SaaS** battle won't be happening after all, thanks to the power of the **Headless CMS**. This technology has come to the rescue, bringing peace and harmony to cloud services by leveraging the strengths of both CaaS and SaaS models.
The headless CMS acts as the perfect mediator, ensuring seamless and efficient content management and delivery. As an API-driven solution, it aligns perfectly with the CaaS model, allowing centralized content management and flexible delivery across multiple platforms.
For SaaS, the headless CMS enhances usability, enabling easy content management without backend complexities. It integrates smoothly with various SaaS applications, providing features for simple software development and app development.
So, thanks to the [BCMS headless CMS](https://thebcms.com/), the clouds have parted, and the skies are clear. CaaS and SaaS can now reign as cloud champions, working together to drive digital transformation and operational efficiency in cloud computing services.
| momciloo |
1,912,867 | Tree Mapper GoJs How to closed items onMounted | https://gojs.net/latest/samples/treeMapper.html, Dear GOJS community, I am new to this topic and I... | 0 | 2024-07-05T14:36:53 | https://dev.to/carolina_nio_7c6147697b1/tree-mapper-gojs-2151 | help | https://gojs.net/latest/samples/treeMapper.html, Dear GoJS community, I am new to this topic and I would like to know if any of you can tell me how to render the nodes collapsed from the start, while still allowing users to expand and collapse them afterwards. I have tried a few functions in `onMounted`, but the diagram does not initially appear closed:
```js
diagram.delayInitialization(() => {
  const nodes = diagram.model.nodeDataArray;
  for (let i = 0; i < nodes.length; i++) {
    const nodeData = nodes[i]; // Access element by index
    console.log("nodeData", nodeData, "group", nodeData.isGroup);
    if (nodeData.isGroup && (nodeData.key === -1 || nodeData.key === -2 || nodeData.key === -3)) {
      nodeData.isTreeExpanded = false;
      nodes[i] = nodeData; // Update element in the original array
    }
  }
  diagram.model.nodeDataArray = nodes; // Update the model after all changes
});
```
Thanks
| carolina_nio_7c6147697b1 |
1,912,866 | How to Set Up a New TypeScript Project | TypeScript is JavaScript with syntax for types. TypeScript is a strongly typed programming... | 0 | 2024-07-05T14:36:49 | https://dev.to/asimnp/how-to-set-up-a-new-typescript-project-56ae | typescript, setup, javascript, webdev | ### TypeScript is JavaScript with syntax for types.
TypeScript is a strongly typed programming language that builds on JavaScript, giving you better tooling at any scale. `.ts` is the file extension for TypeScript files.
### 1. Setup
- Before installing TypeScript on your system, you first need to install Node.js. You can check the [official documentation](https://nodejs.org/en/download/package-manager) for installing it.
- Type the following command to ensure Node.js is installed. The command prints the Node.js version.
```shell
> node --version
```
- Example:
![Node.js version check](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5j67d08acz8oq5okkku9.png)
- Create a new directory and go inside it.
```shell
> mkdir code && cd code
```
- Initialize the `package.json` file. It's a file that lists all the packages or dependencies installed for the application.
```shell
> npm init -y
```
- Install TypeScript
```shell
> npm install typescript --save-dev
```
- The `--save-dev` flag indicates we are installing TypeScript for development purposes only; we don't need this package in production.
- Initialize [TypeScript config file](https://www.typescriptlang.org/docs/handbook/tsconfig-json.html).
```shell
> npx tsc --init
```
### 2. Set Up Project
- Create new files and folders inside the `code` folder.
```shell
> touch index.html
> mkdir src dest
> touch src/main.ts
```
- Folders and files tree
![TypeScript Project Structure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pc68qgmp8k3bh10wvdcb.png)
- **NOTE:** Browsers can only understand JavaScript, so you need to compile the TypeScript into JavaScript.
- Specify where to output the compiled JavaScript. Customize the `tsconfig.json` file to output inside the `dest` folder.
```json
// tsconfig.json
...
"outDir": "./dest", /* Specify an output folder for all emitted files. */
...
```
- The three dots `...` indicate there is more code.
- Create a script that will watch your TypeScript file changes and automatically compile it into JavaScript. We will create a new script `start` inside the `package.json` file.
```json
// package.json
{
"name": "code",
"version": "1.0.0",
"main": "index.js",
"scripts": {
"start": "tsc --watch"
},
"keywords": [],
"author": "",
"license": "ISC",
"description": "",
"devDependencies": {
"typescript": "^5.5.3"
}
}
```
- We will add some code inside our TypeScript file `src/main.ts`
```typescript
const username: string = "Jane Doe";
console.log(`Hello, ${username}!`)
```
- Now, we will run the `start` script to watch the TypeScript file changes.
```shell
> npm run start
```
- The `start` script will automatically create a new file called `main.js` inside the `dest` folder; this is our compiled JavaScript file (shown below).
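- For reference, the generated `dest/main.js` should look roughly like this (the exact output depends on the `target` option in your `tsconfig.json`; the type annotation is stripped either way):

```javascript
"use strict";
const username = "Jane Doe";
console.log(`Hello, ${username}!`);
```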
- Inside our `index.html`, we will link our compiled JavaScript file and open it in the browser. Then, check the console to verify the message is logged.
```HTML
<!-- index.html -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>TypeScript Project Set Up</title>
</head>
<body>
<h1>Press <code>F12</code> and check your browser console. There you will see the message which we have written.</h1>
<script src="./dest/main.js"></script>
</body>
</html>
```
- Now, you can add your code and create your project! | asimnp |
1,912,865 | Elevate Your Beauty Routine at Cinnamon Salon in Mumbai | In the heart of Mumbai, Cinnamon Salon offers a sanctuary of elegance and sophistication. Whether... | 0 | 2024-07-05T14:31:55 | https://dev.to/abitamim_patel_7a906eb289/elevate-your-beauty-routine-at-cinnamon-salon-in-mumbai-15nn | saloninmumbai, bestsaloninmumbai | In the heart of Mumbai, **[Cinnamon Salon](https://trakky.in/Mumbai/Khandiwali%20West/salons/Cinnamon-Salon-khandiwali-west)** offers a sanctuary of elegance and sophistication. Whether you're a local or a visitor seeking top-tier beauty services, Cinnamon Salon promises an exceptional experience that blends luxury with expertise.
Elegant Ambiance and Professional Expertise
Walking into **[Cinnamon Salon](https://trakky.in/Mumbai/Khandiwali%20West/salons/Cinnamon-Salon-khandiwali-west)**, you're immediately enveloped in an atmosphere of calm and luxury. The salon's refined decor and serene environment set the stage for an unforgettable pampering session. Every element, from the comfortable seating to the cutting-edge equipment, reflects a commitment to excellence.
Skilled Stylists and Customized Services
At **[Cinnamon Salon](https://trakky.in/Mumbai/Khandiwali%20West/salons/Cinnamon-Salon-khandiwali-west)**, grooming is elevated to an art form by a team of skilled professionals. Each stylist brings extensive experience and a keen eye for detail, ensuring that every service is personalized to perfection. Whether you're after a chic haircut, vibrant color, or a soothing spa treatment, the stylists at Cinnamon Salon are dedicated to making you look and feel your best.
Comprehensive Range of Services
**[Cinnamon Salon](https://trakky.in/Mumbai/Khandiwali%20West/salons/Cinnamon-Salon-khandiwali-west)** offers a wide array of services to cater to all your beauty needs. From precision haircuts and innovative styling to rejuvenating facials and relaxing massages, every service is designed to enhance your natural beauty and provide a rejuvenating experience. The salon also offers specialized treatments tailored to individual preferences and needs.
Seamless Booking with Trakky
Booking an appointment at **[Cinnamon Salon](https://trakky.in/Mumbai/Khandiwali%20West/salons/Cinnamon-Salon-khandiwali-west)** is effortless with Trakky. Our platform allows you to explore available services, check timings, and book your preferred slot with ease. Whether you’re planning ahead or need a last-minute appointment, Trakky ensures that your salon experience is smooth and convenient.
Visit Cinnamon Salon Today!
Transform your beauty routine at **[Cinnamon Salon in Mumbai](https://trakky.in/Mumbai/Khandiwali%20West/salons/Cinnamon-Salon-khandiwali-west)**. Discover why discerning clients choose Cinnamon Salon for their grooming needs. Book your appointment through Trakky and enjoy a personalized, luxurious experience like no other. | abitamim_patel_7a906eb289 |
1,912,864 | Create an API in Umbraco in 5 Minutes: A Quick Guide for Developers | Why Create an API in Umbraco? APIs (Application Programming Interfaces) allow different... | 27,304 | 2024-07-05T14:26:39 | https://shekhartarare.com/Archive/2024/6/create-an-api-in-umbraco | webdev, umbraco, tutorial, api | ## Why Create an API in Umbraco?
APIs (Application Programming Interfaces) allow different software systems to communicate with each other. By creating an API in Umbraco, you can enable external applications to interact with your website’s content, offering enhanced functionality and improved user experiences.
## Prerequisites
Before we start, ensure you have the following:
- An Umbraco installation
- Visual Studio or your preferred IDE
## Step 1: Set Up Your Umbraco Project
First, ensure your Umbraco project is set up correctly. If you haven't installed Umbraco yet, follow the steps mentioned [here](https://shekhartarare.com/Archive/2024/6/how-to-create-a-new-umbraco-project).
## Step 2: Create the API Controller
Add a new folder called `Controllers`. Inside it, add a new class file and name it `MyAPIController.cs`.
![Create an api controller](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8kbuswyo7cf8wq51yx9i.png)
Write the API Code:
```csharp
using Microsoft.AspNetCore.Mvc;
using Umbraco.Cms.Web.Common.Controllers;
namespace InstallingUmbracoDemo.Controllers
{
public class MyAPIController : UmbracoApiController
{
[HttpGet]
public string GetGreeting()
{
return "Hello, Umbraco API!";
}
}
}
```
## Step 3: Test Your API
We don’t need to do anything about routing. Umbraco will automatically handle that for us. Now, let’s test our API:
Open your browser and navigate to http://yourdomain.com/umbraco/api/myapi/getgreeting. You should see the message “Hello, Umbraco API!”.
![Final output](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/la3kefyu2xkfm0s176xk.png)
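If you need structured data instead of a plain string, you can return an object and let ASP.NET Core serialize it to JSON automatically. A minimal sketch you could add to the same controller (the action name `GetInfo` and its payload are invented for illustration):

```csharp
[HttpGet]
public IActionResult GetInfo()
{
    // Anonymous objects are serialized to JSON by ASP.NET Core
    return Ok(new { Name = "Demo", Version = 1 });
}
```

This should be reachable at http://yourdomain.com/umbraco/api/myapi/getinfo, following the same automatic routing.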
## Conclusion
Creating an API in Umbraco is quick and easy, allowing you to expand the functionality of your website and integrate with external systems. By following the steps outlined in this guide, you can set up a basic API in just 5 minutes without the need for additional routing configuration.
For more advanced Umbraco tutorials and tips, stay tuned to our blog and feel free to leave your questions in the comments! | shekhartarare |
1,912,845 | We need beta testers, testing if the site works and stuff | my company told me to find people to test the site, so I think this part of the internet might be... | 0 | 2024-07-05T14:18:53 | https://dev.to/mtrt/we-need-beta-testers-testing-if-the-site-works-and-stuff-2djb | webdev, javascript, beginners, programming | my company told me to find people to test the site, so I think this part of the internet might be it:
# WE NEED BETA TESTERS🚨
We need people to beta test our web application 🙋🏻♂️🙋🏼♀️
Beta testers will get free tokens ($10) that refill every 24 hours after signing up.
Help us develop the product together.
https://www.producthunt.com/posts/zenqira
Thank you.
Side information:
For only $50 you can help invest in the CPU that trains this AI, and you will get unlimited tokens three times a day. One training run earns $0.35, so three runs = $1.05 a day; $1.05 x 7 days = $7.35 a week; over 30 days that's about $31.84; add another two weeks and you reach roughly $50.20, and so on.
And the tokens will be refilled without limits, so users will have $382.2 this year alone just from training our AI and doing business with us. | mtrt |
1,912,844 | EVM Reverse Engineering Challenge 0x03 | Here I am, this time with the fourth challenge. This one it's a bit more interesting. Let me know if... | 27,871 | 2024-07-05T14:17:06 | https://www.gealber.com/evm-reverse-challenge-0x03 | evm, ethereum, reverseengineering, smartcontract | Here I am, this time with the fourth challenge. This one is a bit more interesting. Let me know if you found it interesting as well. As always, here is the address of the smart contract. Feel free to get that USDT!!
```
0x48f1F0095a91Ed8d3b788AB4c8998640cb5d4463
```
Hint: Can I call myself?
| gealber |
1,912,843 | Content as a Service: How to manage content on the cloud - CaaS guide | Whenever, wherever Content needs to be together Publish there, or publish here, CaaS is the deal,... | 0 | 2024-07-05T14:17:03 | https://dev.to/momciloo/content-as-a-service-how-to-manage-content-on-the-cloud-caas-guide-3kb7 |
Whenever, wherever
Content needs to be together
Publish there, or publish here,
CaaS is the deal, my dear
There over, hereunder
You'll never have to wonder
Whatever form you need
That's the promise of content as a service, my dear.
This Shakira haiku song describes **content as a service** and explains its essence - a cloud-based approach that provides on-demand content delivery and management.
What it means, why you should care, why it is important, why you should consider this approach, how to choose the right CaaS model for your needs, and the key considerations: you will find all the answers in this article.
## What is content as a service
Content as a Service (CaaS) is a cloud-based content management model where organized content, in one place ([Content Hub](https://thebcms.com/blog/content-hub-guide)), can deliver that content to an unlimited number of frontends, such as the website, mobile apps, online stores, or "smart" devices.
### But, why “as service”?
The "as a service" model in cloud computing, including CaaS, builds on the following concept: providing tools, resources, and expertise as a service rather than a product you need to manage yourself.
In legacy CMSs, businesses would often need to invest heavily in infrastructure, software, and personnel to create and manage their content. With CaaS, this paradigm shifts towards a more service-oriented approach.
**Simply put, work is done for you, not by you.**
To understand that, let’s use music as an example one more time. (This time no Shakira). The evolution of the music went from product to service.
Music transitioned from being a physical product (records, tapes, CDs) to a digital product (MP3s, iTunes) and is now a service (Spotify, Apple Music).
This shift to music as a service provides benefits such as instant access to vast libraries of music, personalized recommendations, and the convenience of streaming from any device, anywhere.
Further, Spotify uses its data as a service. By leveraging extensive user data, Spotify can offer personalized playlists, recommendations, and insights, enhancing UX and engagement.
This approach not only improves user satisfaction but also drives business growth by tailoring content to individual preferences and behaviors.
Le-do-lo-le-lo-le, le-do-lo-le-lo-le.
## Content as a service benefits
Here are some key reasons why the "as a service" model is attractive, especially in the context of CaaS:
1. **Reduced infrastructure costs**: With CaaS, you don’t need to invest in an extensive hardware and software infrastructure. The service provider handles all that, saving you money.
2. **Accessibility**: CaaS ensures your content is accessible across various devices, enhancing user experience and reach.
3. **Omnichannel content publishing**: Content can be seamlessly published across multiple channels, from websites and mobiles to social media platforms, ensuring consistent and widespread distribution.
4. **Mobile-friendly:** CaaS platforms ensure that content is optimized for mobile devices, improving engagement and usability.
5. **Affordable models**: Pay-as-you-go or subscription models make CaaS financially accessible, allowing businesses to scale their usage based on need.
6. **Connectivity**: Cloud-based services offer robust connectivity and real-time communication capabilities, essential for modern content management.
7. **Scalability and flexibility**: CaaS platforms can scale resources up or down based on demand, providing unparalleled flexibility
## What are the differences between Content as a Service and traditional content management
Unlike [WordPress](https://thebcms.com/compare/wordpress-alternative) or Drupal, to manage content, the Content as a service strategy follows the following approaches:
### Structured content vs. page-based templates
Structured content and page-based templates represent two distinct approaches to content management. WordPress, for example, often relies on page-based templates, which dictate how content should be organized and presented, which limits content to specific formats such as blogs. This approach can restrict the flexibility of content delivery.
On the other hand, CaaS treats content as modular data that can be easily reused. CaaS breaks content down into reusable blocks, rather than large page templates, shifting the content infrastructure from a page-centric to a content-centric architecture. This allows for more dynamic and versatile content delivery.
### Coupled vs decoupled architecture
Coupled architecture tightly binds the frontend presentation layer with the backend content repository, limiting content delivery to a single channel and often resulting in less flexibility.
In contrast, CaaS supports a decoupled, or even headless, architecture that separates the frontend from the backend. This separation allows content to be delivered to any channel. By isolating content storage and delivery from content presentation, CaaS simplifies the CMS architecture, ensuring that each component performs its specific function without dependency on others.
Learn more about headless: [All you need to know: Headless CMS](https://thebcms.com/blog/headless-cms-101)
### On-Premise vs. Cloud
Simply put, the difference between on-premise and cloud software is location.
An organization that uses on-premise software must handle the security, maintenance, updates, and scalability of its hardware infrastructure.
As a subset of the Software as a Service (SaaS) model, Content as a Service shifts content storage and management to the vendor's cloud. Because of this, CaaS provides a more efficient, scalable content management experience than many traditional CMS solutions.
## How Content as a Service works
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xjc6y8kcrhvr25x7yzvv.jpg)
As part of the CaaS model, the CMS manages content assets independently of how they are delivered. Content creators upload content types—such as text blocks, photos, and videos —to a shared repository.
By using APIs, developers can integrate specific types of content into websites, mobile, or other platforms, creating truly headless and highly scalable solutions.
To do so, all CaaS solutions consist of two main components:
### Shared cloud repository
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ypmkqqd38knl6e3hugn5.jpg)
The importance of a cloud-based repository lies in its ability to centralize content management, offering scalability, accessibility, and efficiency. By storing assets in the cloud, organizations can easily distribute content globally, ensure real-time updates, and reduce the burden of maintaining on-premises infrastructure, ultimately leading to more agile and responsive content delivery.
### API-first approach to content distribution
An API-first approach provides developers with access to content through high-performance APIs, ensuring integration and retrieval of content. It offers flexibility in the choice of front-end and content delivery frameworks, allowing developers to select the best tools for their needs.
Want to see examples? Check out:
- [Next CMS](https://thebcms.com/nextjs-cms)
- [Gatsby CMS](https://thebcms.com/gatsby-cms)
- [Nuxt CMS](https://thebcms.com/nuxt-cms)
The CaaS method guarantees the fastest possible speed for end users, enhancing their experience. This setup eliminates the need for connectors and optimizes performance and scalability by providing direct access to content at the distribution edge.
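To make the API-first idea concrete, here is a minimal sketch of how a frontend might pull content from a CaaS repository. The endpoint URL, token, and response shape are invented for illustration; every CaaS vendor exposes its own API:

```typescript
// Hypothetical CaaS endpoint; real platforms expose similar REST or GraphQL APIs
const API_URL = "https://api.example-caas.com/v1/entries?type=article";

interface Entry {
  id: string;
  title: string;
  body: string;
}

async function fetchArticles(): Promise<Entry[]> {
  const res = await fetch(API_URL, {
    headers: { Authorization: "Bearer <api-token>" }, // placeholder token
  });
  if (!res.ok) throw new Error(`CaaS request failed: ${res.status}`);
  return res.json();
}
```

The same function can feed a website, a mobile app, or any other frontend; that is the decoupling the CaaS model is built on.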
But wait, are CaaS and SaaS the same thing? 🤔
## How CaaS compares with SaaS
Even though both CaaS and SaaS are cloud-based models, they serve different purposes. SaaS provides software applications over the Internet, allowing users to access and use software without worrying about installation, maintenance, or infrastructure.
Delivered or licensed through an online subscription, SaaS simplifies software access for networks with multiple users, eliminating the need for installing software on individual computers.
On the other hand, CaaS focuses specifically on content management and delivery. Just as SaaS delivers software through a single outlet, CaaS provides all content through a single outlet, streamlining content delivery and management.
For deeper understanding and comparison visit: CaaS vs SaaS
OK, now that that's clear, the next question is: how do you adopt Content as a Service for your business?
## How to get started with Content as a Service
Deciding to migrate to CaaS is only the first step. To be able to manage decoupled content, you still need a system to manage and deliver that content. Monolithic CMSs are trying to adapt to the CaaS model through add-ons and plugins, but they remain fundamentally webpage-centric platforms designed to control content presentation.
To go full CaaS, consider investing in a headless, API-first CMS. Such a system is specifically designed around decoupling content, enabling its versatile use across various contexts and maximizing its potential.
## Key considerations while evaluating a CaaS headless CMS
When choosing the right CaaS model, your CMS should be a platform that enables the following features and environment:
### APIs to fetch content
Implementing a CaaS infrastructure that includes open, standards-based REST APIs, GraphQL, and SDKs for a flexible metadata model simplifies the development process. This low-code approach helps developers efficiently address complex content needs.
### Language independence
A headless CMS should offer developers the freedom to build sites on any server in any programming language or framework. It should enable the distribution of content to unlimited sites or front-end environments, delivering it in formats such as JSON, RSS, custom templates, or XML.
### Personalize content
The headless CMS should support content management in various formats, such as text, audio, and video. It should deliver personalized UXs, like dynamic user-based layouts for campaign landing pages or microsites, for each channel or front-end screen/page.
### Content modeling
[Content modeling in a headless CMS](https://thebcms.com/blog/content-modeling-headless-cms) must provide features that support modular content creation, enabling reuse and repurposing of content across different channels.
### Composable architecture
A [composable architecture](https://thebcms.com/blog/composable-architecture-example) allows the CMS to integrate and interoperate with other services and tools, providing a modular approach to building and scaling digital experiences. This flexibility enables organizations to select and assemble various components, such as authentication, analytics, and marketing automation, to create websites or apps that meet their specific needs.
## Use BCMS for your CaaS migration
In the end, I'll leave you with one more consideration: try BCMS for CaaS.
Why? BCMS headless CMS enables you to have reusable content available across websites, apps, and other digital platforms.
By leveraging BCMS for CaaS, your organization can achieve faster content updates, improved UXs, and better scalability, all while reducing infrastructure costs. Embracing BCMS for CaaS truly makes CaaS the "Cloud 9" for content management.
And for the end, a little more haiku:
Whenever, wherever
Content needs to be together
Publish there, or publish here,
BCMS is the deal, my dear
There over, hereunder
You got me over all screens,
There's nothing left to fear
If you really wanna try, try BCMS for free.
Le-do-lo-le-lo-le, le-do-lo-le-lo-le. | momciloo |
1,912,842 | Event Listeners and Anchor Tag | Question: When does the anchor tag redirection happen given that multiple event listeners are... | 0 | 2024-07-05T14:14:46 | https://dev.to/pagarevijayy/event-listeners-and-anchor-tag-3pdo | javascript, webdev, beginners, tutorial | Question:
When does the anchor tag redirection happen given that multiple event listeners are attached to it?
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Event listeners and Anchor tag</title>
</head>
<body>
<a id="demo" onclick="inlineHandler(event)" href="https://www.google.com">link demo</a>
</body>
<script>
const inlineHandler = (e) => {
console.log('inline Handler');
setTimeout(()=> {
console.log('This will not get executed as redirection would have been occurred by then.');
}, 1000);
}
const link = document.getElementById('demo');
link.addEventListener('click', (e) => {
console.log('event listener 1');
})
link.addEventListener('click', (e) => {
console.log('event listener 2');
// e.preventDefault();
})
link.addEventListener('click', (e) => {
console.log('event listener 3');
})
</script>
</html>
```
![Output of the code when multiple event listeners are attached to an anchor tag](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/towqlwdxi0ixkwkl36a7.png)
**Learnings**
- Inline handler always gets executed first.
- Event listeners (for the same element) are handled in the order they are written.
- Anchor tag redirection happens after the execution of all event listeners.
- If any of the event listeners calls `preventDefault` (irrespective of the order), then the redirection won't happen.
Use case: Trigger an analytics event on anchor tag link click (a quick sketch follows below).
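A sketch of what that could look like, reusing the `link` element from the snippet above. `navigator.sendBeacon` is a good fit here because the browser queues the request so it survives the page unload caused by the redirect (the `/analytics` endpoint is hypothetical):

```javascript
link.addEventListener('click', () => {
  // Queued asynchronously by the browser, so the navigation that
  // follows the click handlers does not cancel the request.
  navigator.sendBeacon('/analytics', JSON.stringify({ event: 'link_click' }));
});
```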
I'll leave the guesswork for async code up to you. For any doubts or discussions, please feel free to comment. Thanks! | pagarevijayy |
1,912,841 | Securing the Cloud #32 | Welcome to the 32nd edition of the Securing the Cloud Newsletter! In this issue, we dive into the... | 26,823 | 2024-07-05T14:13:16 | https://community.aws/content/2ipU77YZEcRPJHPm4ZtMFaxgHtn/securing-the-cloud-32 | security, career, learning, community | Welcome to the 32nd edition of the Securing the Cloud Newsletter! In this issue, we dive into the latest trends and insights in cloud security, explore career development opportunities, and share valuable learning resources. Additionally, we feature insightful perspectives from our community members.
## Technical Topic
* [How to securely transfer files with presigned URLs | AWS Security Blog](https://brandonjcarroll.com/links/9gza1) - Securely sharing large files and private data is critical in today's distributed work environments. This article explores how presigned URLs offer a powerful solution by enabling temporary, controlled access to Amazon S3 objects without exposing long-term credentials. It provides prescriptive guidance on best practices for generating and distributing presigned URLs securely, including implementing safeguards against inadvertent data exposure. The article goes into key technical considerations like using unique nonces, access restrictions, and serverless architectures for generating and validating one-time presigned URL access. It even offers a downloadable code sample illustrating how to implement these secure practices. It also emphasizes the importance of governance, continuous monitoring, and automated revocation procedures to maintain effective oversight and control when sharing presigned URLs broadly. By following the guidance outlined in this article, you can unlock the collaborative benefits of presigned URLs while protecting sensitive data. I encourage you to explore the full post to learn how to strike the right balance between secure data sharing and collaborative efficiency using this powerful architectural pattern.
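For a feel of what the article describes, here is a minimal sketch of generating a presigned GET URL with the AWS SDK for JavaScript v3 (the bucket, key, and expiry are placeholders):

```typescript
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const client = new S3Client({ region: "us-east-1" });

async function presignReport(): Promise<string> {
  const command = new GetObjectCommand({ Bucket: "my-bucket", Key: "report.pdf" });
  // Keep lifetimes short, as the article advises: this URL expires after 15 minutes
  return getSignedUrl(client, command, { expiresIn: 900 });
}
```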
## Career Corner
* [Guide to Becoming a Cloud Security Engineer: Roadmap (2024)](https://www.nwkings.com/cloud-security-engineer-roadmap) - As businesses adopt cloud computing, the role of cloud security engineers has become more important and more sought after. This guide digs into the exciting world of cloud security, exploring the responsibilities, skills, and career path. In the article you'll discover how cloud security engineers safeguard sensitive data and implement robust security measures to prevent breaches and cyber threats. You will also gain insights into the various types of cloud security attacks they combat, such as DDoS, hypervisor attacks, and malicious insiders. The article also explores earning potential, certifications, and has a roadmap. Yes, they are promoting a Cloud Security Master's Program that they sell, and I am not recommending you jump into that. But overall for someone that needs an overview and a roadmap, it's a start. And yes, I know, some of this you probably already know, but its good review! If you feel good in this area, just skip it!
## Learning and Education
Want to learn something new? Here you go!
* [Community | What is the Get AWS Certified: Data Engineer – Associate Challenge?](https://brandonjcarroll.com/links/qp6gk) - Sometimes you need to be challenged to make progress. If that's you, here's a challenge you might be interested in.
## Community Voice
A quick note before I get into this week's share. The articles I share here are mostly posted by AWS Heroes and AWS Community Builders. With that said, I do my best not to do two things: 1\ Share posts from Medium, because putting content behind a paywall is not accessible to everyone and I don't want to encourage people to pay for another service. 2\ Drive traffic to LinkedIn. There is a TON of content there and lots of Heroes and Community Builders share their stuff there. If you want that content, please follow them directly on LinkedIn. You can find directories of Heroes and Builders to follow [here](https://aws.amazon.com/developer/community/community-builders/community-builders-directory/?cb-cards.sort-by=item.additionalFields.cbName&cb-cards.sort-order=asc&awsf.builder-category=*all&awsf.location=*all&awsf.year=*all) and [here](https://aws.amazon.com/developer/community/heroes/?community-heroes-all.sort-by=item.additionalFields.sortPosition&community-heroes-all.sort-order=asc&awsf.filter-hero-category=*all&awsf.filter-location=*all&awsf.filter-year=*all&awsf.filter-activity=*all). If you'd like to contribute content to the newsletter, please reach out to me directly!
So, here is a roundup of a few posts from the community this week:
1. [AWS Managed KMS Keys and their Key Policies: Security Implications and Coverage for AWS Services](https://www.fogsecurity.io/blog/encryption-aws-managed-kms-keys) - Are you curious about the AWS Managed KMS Keys and their potential security implications? This blog post provides an insightful overview and introduces a handy tool from Fog Security that scans and lists all AWS Managed KMS Keys along with their corresponding key policies. With visibility into these keys being a challenge, the post highlights the importance of understanding their usage across various AWS services. It also discusses the pros and cons of using AWS Managed KMS Keys, encouraging readers to make informed decisions. The accompanying GitHub repository offers a comprehensive listing of AWS Managed KMS Keys and their key policies, regularly updated through an automated scanning process. Quick statistics and repository contents are also provided, giving you a glimpse into the valuable information available. If you're interested in cloud data security or have feedback on the tool, the author invites you to reach out to Fog Security. Don't miss the opportunity to explore this resource and gain insights into AWS Managed KMS Keys and their potential impact on your cloud environment.
2. [Setting up AWS IAM Identity Center as an identity provider for Confluence - DEV Community](https://dev.to/aws-builders/setting-up-aws-iam-identity-center-as-an-identity-provider-for-confluence-2l8) - This detailed guide walks you through setting up single sign-on (SSO) for the popular collaboration tool Confluence, using AWS IAM Identity Center. By integrating Confluence with AWS IAM Identity Center, you can centrally manage access for your users across multiple AWS accounts and Confluence itself. The step-by-step instructions cover everything from configuring the Confluence application in IAM Identity Center, to verifying domain ownership in Atlassian Admin, creating the identity provider, and enforcing SSO in Confluence's authentication policies. While the process involves several steps across AWS and Atlassian's interfaces, the guide provides clear directions and troubleshooting tips to ensure a smooth integration. If you're looking to streamline authentication and account management between your AWS environment and Confluence, this comprehensive walkthrough could save you a significant amount of time and effort. The ability to leverage AWS IAM Identity Center for SSO with third-party apps like Confluence also highlights its versatility as an identity provider solution.
That's it for this week. I encourage you to subscribe, share, and leave your comments on this edition of the newsletter.
Also, if you will be attending the AWS Summit New York, please let me know. I will be there as well, and I am planning on doing some videos with community members. If videos aren't your thing, let's at least have a chat!
That's it for now!
Happy Labbing! | 8carroll |
1,912,838 | [Angular][pt] Desinscrições no Angular: Quando e por quê usar? | Introdução Nos meus 6 anos trabalhando com Angular, uma pergunta recorrente que vejo... | 0 | 2024-07-05T14:10:41 | https://dev.to/tatopetry/angularpt-desinscricoes-no-angular-quando-e-por-que-usar-109f |
## Introduction
In my six years working with Angular, a recurring question I see come up, even from less experienced devs, is: "Do I need to unsubscribe here?"
The answer is: yes, in most cases. But it is not always that simple. In this article, I want to look at when and why to use unsubscriptions in Angular.
## What is unsubscribing, and why is it important?
In short, unsubscribing from an observable means you stop receiving its notifications. This is crucial to avoid memory leaks and to keep your Angular application running efficiently.
Imagine you subscribe to a weekly magazine with offers from a local supermarket. The magazine is very useful while you live near the store, since it keeps you up to date on the best deals. But if you move to another city where that store does not exist, continuing to receive the magazine becomes unnecessary and only fills your mailbox with content you will never use. In that case, it is better to cancel the subscription and stop receiving irrelevant information.
Observables in Angular work in a similar way. When you subscribe to an observable, you are essentially asking to be notified whenever something new happens. If you no longer need those notifications, it is important to unsubscribe so the observable stops sending unnecessary notifications, which can lead to memory leaks.
## When should I unsubscribe?
- **Whenever you no longer need the observable's notifications.** For example, if you fetch data for a screen and then navigate to another screen, you no longer need the notifications from the previous screen's observable.
- **When the component that created the subscription is destroyed.** This usually happens automatically when you navigate to another screen, but it is worth checking for cases where it does not and, if needed, unsubscribing manually in the component's `ngOnDestroy` (see the sketch below).
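A minimal sketch of that manual cleanup (`interval()` stands in for whatever observable you subscribe to):

```typescript
import { Component, OnDestroy } from '@angular/core';
import { Subscription, interval } from 'rxjs';

@Component({ selector: 'app-example', template: '' })
export class ExampleComponent implements OnDestroy {
  // A subscription created manually, outside of the async pipe
  private subscription: Subscription = interval(1000).subscribe(console.log);

  ngOnDestroy(): void {
    // Runs when Angular destroys the component; release the subscription here
    this.subscription.unsubscribe();
  }
}
```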
## How do I unsubscribe?
There are several ways to unsubscribe from an observable in Angular. Here are some of the most common methods:
### 1. Using the `unsubscribe()` method
This is the most basic and direct method. You can store a reference to the subscription in a variable and call `unsubscribe()` on it when you no longer need the notifications.
```typescript
const subscription = myObservable.subscribe(data => {
console.log(data);
});
// ... some time later ...
subscription.unsubscribe();
```
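In a component, this usually means keeping the `Subscription` in a field and tearing it down in `ngOnDestroy`. Here is a minimal sketch of that pattern; the `DataService` and its `getData()` observable are hypothetical names used only for illustration:

```typescript
import { Component, OnInit, OnDestroy } from '@angular/core';
import { Subscription } from 'rxjs';
import { DataService } from '../data.service'; // hypothetical service

@Component({
  selector: 'app-data-list',
  template: '...'
})
export class DataListComponent implements OnInit, OnDestroy {
  private subscription?: Subscription;

  constructor(private dataService: DataService) {}

  ngOnInit(): void {
    // Keep a reference so we can tear the subscription down later
    this.subscription = this.dataService.getData().subscribe(data => {
      console.log(data);
    });
  }

  ngOnDestroy(): void {
    // Stop receiving notifications when the component is destroyed
    this.subscription?.unsubscribe();
  }
}
```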
### 2. Using the `takeUntil()` operator
This operator lets you automatically unsubscribe from an observable when another observable emits a value. It's useful when you want to tear down a main subscription as soon as a secondary "notifier" observable fires, for example on component destruction.
```typescript
import { Subject } from 'rxjs';
import { takeUntil } from 'rxjs/operators';

const subject = new Subject<void>();

const subscription = myObservable.pipe(
  takeUntil(subject)
).subscribe(data => {
  console.log(data);
});

// ... some time later ...
subject.next(); // emitting on the notifier completes the subscription
```
### 3. Using third-party libraries
There are several third-party libraries that can help you manage unsubscriptions more easily. One popular option is `ngx-unsubscribe`.
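Worth noting: since Angular 16 there is also a built-in alternative, the `takeUntilDestroyed` operator from `@angular/core/rxjs-interop`, which ties a subscription to the component's `DestroyRef`. A minimal sketch, assuming Angular 16 or newer (the `interval` stream is just a stand-in data source):

```typescript
import { Component } from '@angular/core';
import { takeUntilDestroyed } from '@angular/core/rxjs-interop';
import { interval } from 'rxjs';

@Component({
  selector: 'app-ticker',
  template: '...'
})
export class TickerComponent {
  constructor() {
    // Called inside an injection context (here, the constructor),
    // takeUntilDestroyed resolves DestroyRef automatically and
    // completes the stream when the component is destroyed.
    interval(1000)
      .pipe(takeUntilDestroyed())
      .subscribe(tick => console.log(tick));
  }
}
```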
## Automatic unsubscription with Angular's built-in libraries
Some of Angular's built-in modules already handle cleanup for you, which is the case with `HttpClientModule` and the router. In those situations you generally don't need to worry about unsubscribing from HTTP requests or route changes.
### HttpClientModule
Observables returned by `HttpClient` complete on their own once the request finishes (or fails), so subscriptions to them don't linger. Note, however, that a pending request is not cancelled automatically when the component is destroyed; if you need to cancel an in-flight request early, you still have to unsubscribe from it.
### Router
The observables exposed by `ActivatedRoute` (such as `params` and `queryParams`) are managed by the router, which cleans them up when the routed component is destroyed. The application-wide `Router.events` stream is different: subscriptions to it are not cleaned up automatically and should be unsubscribed manually.
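For instance, a `Router.events` subscription can be scoped to a component's lifetime with the same `takeUntil` pattern shown earlier. A minimal sketch (the component and its logging are purely illustrative):

```typescript
import { Component, OnDestroy } from '@angular/core';
import { Router, NavigationEnd } from '@angular/router';
import { Subject } from 'rxjs';
import { filter, takeUntil } from 'rxjs/operators';

@Component({
  selector: 'app-nav-logger',
  template: '...'
})
export class NavLoggerComponent implements OnDestroy {
  private destroy$ = new Subject<void>();

  constructor(router: Router) {
    // Router.events is application-wide, so we scope the subscription
    // to this component's lifetime ourselves
    router.events
      .pipe(
        filter((event): event is NavigationEnd => event instanceof NavigationEnd),
        takeUntil(this.destroy$)
      )
      .subscribe(event => console.log('Navigated to:', event.urlAfterRedirects));
  }

  ngOnDestroy(): void {
    this.destroy$.next();
    this.destroy$.complete();
  }
}
```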
### The `async` pipe
The `async` pipe is a convenient way to handle observables in an Angular template. It automatically subscribes to the observable and unsubscribes when the component is destroyed. This helps prevent memory leaks without writing any manual unsubscription code.
#### Example using the `async` pipe:
```html
<div *ngIf="authService.isLoggedIn$ | async as isLoggedIn">
  <p *ngIf="isLoggedIn">You are logged in!</p>
  <p *ngIf="!isLoggedIn">You are not logged in.</p>
</div>
```
In this example, `isLoggedIn$` is an observable exposed by the `AuthService` service. The `async` pipe subscribes to it and automatically keeps the template up to date with its current value.
## What about Reactive Forms observables?
Reactive Forms observables such as `valueChanges` deserve a caveat: they never complete on their own, and Angular does not unsubscribe from them for you. In practice these subscriptions rarely leak, because the form controls usually live and die with the component, but if a control outlives the component, or the subscription callback holds onto other references, you should unsubscribe explicitly, as the example below does with `takeUntil`.
## Practical example: Using `valueChanges` and unsubscription in a login component
Let's walk through a practical example of unsubscription in a simple login component. We'll use the `takeUntil` operator to unsubscribe when the component is destroyed, and we'll also monitor value changes on the form's input fields.
```typescript
import { Component, OnInit, OnDestroy } from '@angular/core';
import { FormGroup, FormControl, Validators } from '@angular/forms';
import { AuthService } from '../auth.service';
import { Subject } from 'rxjs';
import { takeUntil } from 'rxjs/operators';
import { Router } from '@angular/router';

@Component({
  selector: 'app-login',
  templateUrl: './login.component.html',
  styleUrls: ['./login.component.css']
})
export class LoginComponent implements OnInit, OnDestroy {
  loginForm!: FormGroup;
  private unsubscribeSubject = new Subject<void>();

  constructor(private authService: AuthService, private router: Router) { }

  ngOnInit(): void {
    this.loginForm = new FormGroup({
      email: new FormControl('', [Validators.required, Validators.email]),
      password: new FormControl('', [Validators.required])
    });

    // Monitor value changes via valueChanges until the component is destroyed
    this.loginForm.valueChanges
      .pipe(takeUntil(this.unsubscribeSubject))
      .subscribe(changes => {
        console.log('Form value changed:', changes);
      });
  }

  ngOnDestroy(): void {
    // Emit and complete so every takeUntil-guarded subscription is torn down
    this.unsubscribeSubject.next();
    this.unsubscribeSubject.complete();
  }

  onSubmit(): void {
    const { email, password } = this.loginForm.value;
    this.authService.login(email, password)
      .pipe(
        takeUntil(this.unsubscribeSubject) // unsubscribes when unsubscribeSubject emits
      )
      .subscribe({
        next: () => {
          // Login succeeded
          this.router.navigate(['/']);
        },
        error: (error) => {
          // Login failed
          console.error(error);
        }
      });
  }
}
```
In this example, besides using the `takeUntil` operator to unsubscribe from the login observable when the component is destroyed, we also monitor the form's value changes through `valueChanges`. This ensures value changes are logged while the component is active and stop as soon as the component is destroyed.
## Summary
In this article, we saw that unsubscribing is crucial to avoid memory leaks and to keep your Angular application running efficiently. We covered the following topics:
1. **What is unsubscription and why does it matter?**
2. **When should I unsubscribe?**
3. **How do I unsubscribe?**
4. **Automatic unsubscription with Angular's built-in libraries**
5. **The `async` pipe**
6. **What about Reactive Forms observables?**
7. **Practical example: Using `valueChanges` and unsubscription in a login component**
I hope this article helped you better understand the concept of unsubscription in Angular and how to apply it in your projects.
### Extra tips:
- Whenever you create a subscription to an observable, think about how you are going to unsubscribe from it.
- Use tools such as Angular DevTools to inspect the subscriptions in your components and spot potential memory leaks.
- Consider third-party libraries such as `ngx-unsubscribe` to make managing unsubscriptions in your projects easier.
Remember: unsubscribing is an essential practice for building robust and efficient Angular applications. | tatopetry |
1,912,840 | Game Dev Digest — Issue #240 - Lighting, Animating, and more | Issue #240 - Lighting, Animating, and more This article was originally published on... | 4,330 | 2024-07-05T14:10:24 | https://gamedevdigest.com/digests/issue-240-lighting-animating-and-more.html | gamedev, unity3d, csharp, news | ---
title: Game Dev Digest — Issue #240 - Lighting, Animating, and more
published: true
date: 2024-07-05 14:10:24 UTC
tags: gamedev,unity,csharp,news
canonical_url: https://gamedevdigest.com/digests/issue-240-lighting-animating-and-more.html
series: Game Dev Digest - The Newsletter About Unity Game Dev
---
### Issue #240 - Lighting, Animating, and more
*This article was originally published on [GameDevDigest.com](https://gamedevdigest.com/digests/issue-240-lighting-animating-and-more.html)*
![Issue #240 - Lighting, Animating, and more](https://gamedevdigest.com/assets/social-posts/issue-240.png)
Hope you have a good game dev weekend. Enjoy!
---
[**An Algorithm for city map generation**](https://zero.re/worldsmith/roadnet/) - This article is only about street generation. However, this is the first part of a longer series that will explore several generation algorithms to create an entire city, down to individual apartments.
[_zero.re_](https://zero.re/worldsmith/roadnet/)
[**Designing A Game’s Flow [Introductory Guide]**](https://gamedesignskills.com/game-design/game-flow/)
[_gamedesignskills.com_](https://gamedesignskills.com/game-design/game-flow/)
[**I'm making a duck that moves with procedural animation and made breakdown of the current setup.**](https://x.com/robotgrapefruit/status/1808913118698918085) - Last weekend I posted a video of the duck I’m making with procedural animation, and the response was really encouraging! Some of you were asking how it’s made, so I thought I’d make a little breakdown of the setup!
[_Mike Sebele @robotgrapefruit_](https://x.com/robotgrapefruit/status/1808913118698918085)
[**Get a Free Huge PDF Report on the Outsourcing Market in 2024**](https://80.lv/articles/get-a-free-huge-pdf-report-on-the-outsourcing-market-in-2024/) - Is outsourcing a risky move or a winning strategy for game developers today?
[_80.lv_](https://80.lv/articles/get-a-free-huge-pdf-report-on-the-outsourcing-market-in-2024/)
[**Study finds co-op games keep growing in numbers (and sales) on Steam**](https://www.gamedeveloper.com/business/study-finds-co-op-games-keep-growing-in-numbers-and-sales-on-steam) - The power or two (or four). The amount of Steam games with co-op as a selling point grows on a yearly basis, and they're outshining their solo brethren.
[_gamedeveloper.com_](https://www.gamedeveloper.com/business/study-finds-co-op-games-keep-growing-in-numbers-and-sales-on-steam)
[**Achieve Faster CPU Rendering with Render Modes & Graphics Jobs**](https://thegamedev.guru/unity-cpu-performance/rendermodes-graphics-jobs/) - Know what Unity is doing behind your back with the game objects of your scene? You’d be surprised to see how rendering works under the hood.
[_thegamedev.guru_](https://thegamedev.guru/unity-cpu-performance/rendermodes-graphics-jobs/)
[**Programmers Should Never Trust Anyone, Not Even Themselves**](https://carbon-steel.github.io/jekyll/update/2024/06/19/abstractions.html) - It is folly to pursue certainty of your code’s correctness. A bug may be hiding in a dependency that you’ll never find. Yet we should not despair. We can still decrease the risk of bugs via greater understanding and due diligence.
[_carbon-steel.github.io_](https://carbon-steel.github.io/jekyll/update/2024/06/19/abstractions.html)
## Videos
[![Exploring a New Approach to Realistic Lighting: Radiance Cascades](https://gamedevdigest.com/assets/images/yt-3so7xdZHKxw.jpg)](https://www.youtube.com/watch?v=3so7xdZHKxw)
[**Exploring a New Approach to Realistic Lighting: Radiance Cascades**](https://www.youtube.com/watch?v=3so7xdZHKxw) - Radiance Cascades are an innovative solution to global illumination from the devs of Path of Exile 2. Let's explore and implement their approach.
[_SimonDev_](https://www.youtube.com/watch?v=3so7xdZHKxw)
[**I Tried Turning Games Into Text**](https://www.youtube.com/watch?v=gg40RWiaHRY) - ASCII art has been a staple of the internet since its inception -- but today I'm wondering if we could make a shader that turns games into ASCII art, and if we can, would it even look good?
[_Acerola_](https://www.youtube.com/watch?v=gg40RWiaHRY)
[**Thwips and Hugs: The Animation of 'Marvel's Spider-Man 2'**](https://www.youtube.com/watch?v=WMlEyfqVZLk) - In this 2024 Animation Summit session, join the esteemed team of Insomniac Animation Directors as they guide you on an immersive journey behind the scenes, offering a deeper exploration of the creative process behind Marvel's Spider-Man 2's animated world.
[_GDC_](https://www.youtube.com/watch?v=WMlEyfqVZLk)
[**Make the Trailer Before the Game: A Marketing First Way to Prototype**](https://www.youtube.com/watch?v=10YhD9HMsPA) - Why wait until the game is deep in production before you think of its first trailer? Thinking about what a trailer for your game will look like when you're at the prototyping phase can be an effective lens through which you can assess the appeal and viability of a game idea and show it to collaborators and business partners. Prolific game trailer editor Derek Lieu shares step-by-step instructions and best practices for conceiving game ideas via mockup trailers.
[_GDC_](https://www.youtube.com/watch?v=10YhD9HMsPA)
[**Making Connections: Real-Time Path-Traced Light Transport in Game Engines**](https://www.youtube.com/watch?v=lxRgmZTEBHM) - Path traced effects in games have been a dream of many since the early days of real-time rendering. The advent of hardware accelerated ray tracing and recent advances in algorithms derived from Reservoir-based Spatio-Temporal Importance Sampling (ReSTIR) have turned those dreams into reality.
[_GDC_](https://www.youtube.com/watch?v=lxRgmZTEBHM)
[**What sequels can teach us about Game Development**](https://www.youtube.com/watch?v=ruxYqT9iXPY) - I will note that the list I made is not necessarily "the truth", it is somewhat in the order of how I personally prioritise and think about these things, but I do think it could be equally valid to find other things more important.
[_Nonsensical 2D_](https://www.youtube.com/watch?v=ruxYqT9iXPY)
## Assets
[![SPECTACULAR EFFECTS SALE](https://gamedevdigest.com/assets/images/1720183964.png)](https://assetstore.unity.com/?on_sale=true&orderBy=1&rows=96&aid=1011l8NVc)
[**SPECTACULAR EFFECTS SALE**](https://assetstore.unity.com/?on_sale=true&orderBy=1&rows=96&aid=1011l8NVc) - 50% Off Sale. Create gaming magic with immersive VFX, particle systems, shaders, audio, and more at 50% off.
Including: [World Scan FX](https://assetstore.unity.com/packages/vfx/shaders/world-scan-fx-263296?aid=1011l8NVc), [Creative Lights](https://assetstore.unity.com/packages/vfx/particles/fire-explosions/creative-lights-282858?aid=1011l8NVc), [Toon Pro: Ultimate Stylized Shading](https://assetstore.unity.com/packages/vfx/shaders/toon-pro-ultimate-stylized-shading-225921?aid=1011l8NVc), [Advanced Edge Detection](https://assetstore.unity.com/packages/vfx/shaders/advanced-edge-detection-262863?aid=1011l8NVc), and more!
[_Unity_](https://assetstore.unity.com/?on_sale=true&orderBy=1&rows=96&aid=1011l8NVc) **Affiliate**
[**Unity and Unreal Engine Mega Bundle**](https://www.humblebundle.com/software/unreal-engine-and-unity-mega-bundle-software?partner=unity3dreport) - Limitless creation for Unity & Unreal
[_Humble Bundle_](https://www.humblebundle.com/software/unreal-engine-and-unity-mega-bundle-software?partner=unity3dreport) **Affiliate**
[**InspectorTween**](https://github.com/RadialGames/InspectorTween?) - Tween system for unity mostly for setup in inspector instead of code.
[_RadialGames_](https://github.com/RadialGames/InspectorTween?) *Open Source*
[**AnKuchen**](https://github.com/kyubuns/AnKuchen?) - Control UI Prefab from Script Library
[_kyubuns_](https://github.com/kyubuns/AnKuchen?) *Open Source*
[**TextureSource**](https://github.com/asus4/TextureSource?) - Virtual Texture Source for Unity (WebCam, Video, AR Camera)
[_asus4_](https://github.com/asus4/TextureSource?) *Open Source*
[**com.bananaparty.websocketclient**](https://github.com/forcepusher/com.bananaparty.websocketclient?) - Fully cross-platform WebSocket client library.
[_forcepusher_](https://github.com/forcepusher/com.bananaparty.websocketclient?) *Open Source*
[**com.bananaparty.behaviortree**](https://github.com/forcepusher/com.bananaparty.behaviortree?) - Fully cross-platform Behavior Tree.
[_forcepusher_](https://github.com/forcepusher/com.bananaparty.behaviortree?) *Open Source*
[**figma-ui-image**](https://github.com/Volorf/figma-ui-image?) - Figma UI Image. A package to bring a Figma Design to Unity as a UI Image.
[_Volorf_](https://github.com/Volorf/figma-ui-image?) *Open Source*
[**PrefsUGUI**](https://github.com/a3geek/PrefsUGUI?) - Auto creation GUI elements by doing variable declaration.
[_a3geek_](https://github.com/a3geek/PrefsUGUI?) *Open Source*
[**MGS.Animation**](https://github.com/mogoson/MGS.Animation?) - Unity plugin for path animations in scene.
[_mogoson_](https://github.com/mogoson/MGS.Animation?) *Open Source*
[**Deform**](https://github.com/keenanwoodall/Deform?) - A fully-featured deformer system for Unity that lets you stack effects to animate models in real-time
[_keenanwoodall_](https://github.com/keenanwoodall/Deform?) *Open Source*
[**Unity-Editor-History**](https://github.com/BedtimeDigitalGames/Unity-Editor-History?) - View and navigate your selection history!
[_BedtimeDigitalGames_](https://github.com/BedtimeDigitalGames/Unity-Editor-History?) *Open Source*
[**Simple-ActiveRagdoll**](https://github.com/davidkimighty/Simple-ActiveRagdoll?) - Simple implimentation of active-ragdoll in Unity.
[_davidkimighty_](https://github.com/davidkimighty/Simple-ActiveRagdoll?) *Open Source*
[**AssetHunts Studio! Open Game Asset Library**](https://assethunts.itch.io/) - Everything Free & CC0 Licensed. We're releasing New Asset Pack regularly!
[_assethunts.itch.io_](https://assethunts.itch.io/)
[**50% off ricimi - Publisher Sale**](https://assetstore.unity.com/publisher-sale?aid=1011l8NVc) - Ricimi is committed to crafting high-quality game UIs and art assets that help developers kickstart their own projects. Save time thanks to these unique and professional visual designs. PLUS, get [Sweet Cakes Icon Pack](https://assetstore.unity.com/packages/package/id/88182?aid=1011l8NVc) for FREE with code RICIMI
[_Unity_](https://assetstore.unity.com/publisher-sale?aid=1011l8NVc) **Affiliate**
[**Audio Arcade - The Definitive Collection Of Music And Sound From Ovani Sound**](https://www.humblebundle.com/software/audio-arcade-definitive-collection-music-and-sound-fx-from-ovani-sound-software?partner=unity3dreport) - Expand your audio arsenal. Give your project that last bit of audio polish it needs to truly shine with this bundle from Ovani Sound. You’ll get a vast collection of royalty-free music and sound FX ready to plug into your project, as well as powerful time-saving audio plugins usable on all major game engines. From masterfully crafted gunshots and explosions and music packs that span moods and genres, to music and ambiance plugins for Godot, Unity, and Unreal Engine, your audio needs will be well and fully sorted. Plus, your purchase helps the Children’s Miracle Network.
[_Humble Bundle_](https://www.humblebundle.com/software/audio-arcade-definitive-collection-music-and-sound-fx-from-ovani-sound-software?partner=unity3dreport) **Affiliate**
## Spotlight
[![Atlas Wars](https://gamedevdigest.com/assets/images/yt-v9g1EWU_Z1A.jpg)](https://store.steampowered.com/app/2623340/Atlas_Wars/)
[**Atlas Wars**](https://store.steampowered.com/app/2623340/Atlas_Wars/) - Join the epic world of Atlas Wars, a fast-paced skill-based multiplayer brawler! Choose unique heroes, master awesome abilities, and battle in crazy and unique arenas. Atlas Wars offers an action-packed brawl experience!
_[You can wishlist it on [Steam](https://store.steampowered.com/app/2623340/Atlas_Wars/) and visit their [website](https://atlaswars.com/)]_
[_Stone Shard Games, Nano Knight Studio_](https://store.steampowered.com/app/2623340/Atlas_Wars/)
---
[![Call Of Dookie](https://gamedevdigest.com/assets/images/1705068448.png)](https://store.steampowered.com/app/2623680/Call_Of_Dookie/)
My game, Call Of Dookie. [Demo available on Steam](https://store.steampowered.com/app/2623680/Call_Of_Dookie/)
---
You can subscribe to the free weekly newsletter on [GameDevDigest.com](https://gamedevdigest.com)
This post includes affiliate links; I may receive compensation if you purchase products or services from the different links provided in this article.
| gamedevdigest |
1,912,837 | Why Enterprise Cloud Solutions Are Crucial for Business Growth | Businesses are starting to use cloud technology in today's dynamic environment in order to remain... | 0 | 2024-07-05T14:07:40 | https://dev.to/rickyymartinn001/why-enterprise-cloud-solutions-are-crucial-for-business-growth-14d6 | cloudsolutions, cloudtechnology, itsolutions, hatchtechs | In today's dynamic environment, businesses are adopting cloud technology to stay ahead of the curve and spur growth. These solutions offer a high degree of security, scalability, and cost savings, making them a strong option for companies competing in a crowded market.
In this blog post, we'll look at why [enterprise cloud solutions](https://hatchtechs.com/it-solutions/cloud-development) are essential for business growth and how they can transform the way companies operate.
**1. Scalability and Adaptability**
Scalability is one of the key features of enterprise cloud systems. Traditional IT infrastructure can be a significant obstacle, especially for smaller firms, because scaling it up or down often requires large investments and long lead times. Cloud solutions, by contrast, let businesses adjust their resources quickly and easily to match changing demand. This flexibility means companies can respond to shifting customer needs, seasonal variations, or unexpected growth surges without downtime or performance problems.
**2. Improved Data Security**
In the digital era, data security is a major concern for businesses. Cloud solutions have security features built in to safeguard private data from cyber threats, and cloud providers invest heavily in measures such as firewalls, multi-factor authentication, and encryption.
**3. Disaster Recovery and Business Continuity**
Business operations can be significantly affected by natural disasters and other unexpected events. Enterprise cloud solutions offer a strong foundation for disaster recovery: cloud providers typically include data backup and recovery services, ensuring that critical corporate information is protected and can be quickly restored after a disruption. This capability helps companies keep operating and minimize downtime, which is essential for growth and customer satisfaction.
**4. Environmental Sustainability**
As businesses become more eco-conscious, enterprise cloud solutions offer a sustainable alternative to traditional IT infrastructure. Cloud data centers are designed to be energy-efficient and to reduce carbon footprint. By migrating to the cloud, businesses can demonstrate their commitment to environmental sustainability, which can strengthen a company's reputation and attract environmentally conscious customers and partners.
**5. Quicker Time to Market**
In a competitive business landscape, speed is pivotal. Cloud solutions enable businesses to deploy new applications, services, and products more quickly than traditional infrastructure allows. The agility the cloud provides lets companies respond rapidly to market demands and customer needs, giving them a competitive edge. That agility can be a significant driver of business growth.
**6. Global Reach**
For businesses looking to expand their reach, enterprise cloud solutions provide the infrastructure needed to support global operations. Cloud platforms are available worldwide, allowing companies to scale their services and enter new markets without building extensive physical infrastructure. This global reach can open up new opportunities for growth and revenue.
**Conclusion**
Cloud solutions are not just a technology trend; they are a strategic necessity for businesses aiming for growth and scalability. The adaptability, cost efficiency, improved collaboration, and enhanced security that cloud solutions provide make them indispensable tools for driving business growth. By adopting enterprise cloud solutions, companies can operate more efficiently, stand out from the competition, and thrive in the digital world.
| rickyymartinn001 |
1,911,972 | Transforming Titans: A Novel Journey of Agile Leadership in Outsourcing | Chapter 1: The New Beginning Paul Keen stood in his new office, his mind racing with... | 0 | 2024-07-05T14:07:30 | https://jetthoughts.com/blog/transforming-titans-outsourcing-odyssey-leadership-agile/ | leadership, agile, management, productivity | ### **Chapter 1: The New Beginning**
[Paul Keen](https://www.linkedin.com/in/paul-keen/) stood in his new office, his mind racing with possibilities and challenges. He had just been hired as the CTO of a company known for its outstaffing model, where they provided clients with skilled tech personnel. However, Paul's mission was clear: transform this company into a thriving outsourcing service provider.
The CEO had been clear. The market was changing, and they needed to adapt. Clients wanted complete solutions, not just people. The task seemed daunting, but Paul was ready. He knew this journey would be challenging. There would be obstacles, and it would be tough without help. He thought of "The Phoenix Project," a book he once read about turning around a failing IT organization. It was time to channel that same energy.
Paul's first step was to understand the company's current state. He called it "Operation Insight."
This initial phase set the stage for a comprehensive transformation, where understanding the company's strengths and weaknesses would be crucial for the next steps.
### **Chapter 2: The First Steps**
![old compass greek mythology hyperrealistic](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rgjwkbnqyznbzwlxuqv7.png)
Paul ran a SWOT analysis to identify strengths, weaknesses, opportunities, and threats. Strengths? The company had great talent. Weaknesses? They lacked processes for delivering complete projects. Opportunities? There was a vast market for outsourcing services. Threats? Competitors were already doing it.
In addition to SWOT, Paul developed a detailed Stakeholder Mapping and Engagement Plan. This plan identified key stakeholders, their influence, and their concerns. Engaging these stakeholders early would be critical to the success of the transformation.
- **CEO Alina**: Strong supporter, emphasized the need for complete solutions to stay competitive.
- **CFO Mark**: Cautious and concerned about the financial implications of the transformation.
- **Head of Sales Dima**: Optimistic, saw potential for increased revenue and market share.
- **HR Head Sarah**: Skeptical, worried about the impact on staff and the need for retraining.
- **CMO Erik**: Enthusiastic and eager to leverage marketing to showcase the new services.
- **The Board**: Mixed views, but generally supportive if financial stability could be maintained.
Paul articulated a vision: the company would offer complete outsourcing solutions, handling entire projects rather than just providing people. Knowing the next critical step was to rally the whole company behind the transformation, he held town hall meetings to explain why the change was critical, painting a vivid picture of a future where the company led the market in outsourcing services and instilling a sense of urgency in everyone who listened.
With the groundwork laid, Paul was ready to form a dedicated team to drive the transformation forward.
### **Chapter 3: The Vanguard**
![ship with warriors](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ucdba56scdgjodpm7wzl.png)
Next, he formed a powerful coalition — The Vanguard. The team brought together key stakeholders from different departments and was crucial to the transformation. Using the stakeholder map, Paul carefully selected members who could drive change and influence others. Some, like Michael Vas, a veteran project manager, were on board from the start. His knowledge and insights were invaluable, and he quickly became Paul's trusted lieutenant.
Michael suggested addressing Sarah's skepticism directly. "We need her on our side, Paul. Let's sit down with her and address her concerns."
Paul agreed and arranged a meeting with Sarah. "Sarah, I understand your concerns about the impact on staff. Let's work together to ensure everyone is supported," Paul began.
Sarah sighed, "It's just that this change is massive, and I'm worried about our employees' morale and their ability to adapt."
"We can develop comprehensive training programs and provide continuous support," Paul assured her. "Your expertise in HR will be vital in this process."
After a lengthy discussion, Sarah nodded, "Alright, I see the potential benefits. I'll help manage the staff transition and ensure we communicate effectively."
Sarah's expertise in HR became a vital asset to the Vanguard, helping to manage staff transitions and morale. With her support, the team was better equipped to handle the challenges ahead.
Erik, the CMO, also brought valuable insights. "We need to market these changes effectively to our clients. Let's ensure our messaging highlights the benefits of our new services," he said.
The formation of the Vanguard marked the beginning of active efforts towards implementing the transformation strategy.
### **Chapter 4: The Trials and Tribulations**
![The great tribulation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mu9qo09gs6h9ci4y1x0a.png)
Recognizing the need for quick wins to build momentum, Paul and his team launched pilot projects. These were carefully chosen to test their new processes and demonstrate early successes.
One afternoon, Paul and Michael sat down to review the pilot project's results. "We're missing deadlines, and costs are ballooning," Paul noted, frustration evident in his voice.
Michael nodded, "We need to be more agile. Let's incorporate Lean Startup principles – iterate quickly and learn from each step."
"Agreed," Paul replied, "Let's test small and learn fast."
They implemented these changes, and the next project saw marked improvements.
The evolution of their projects was strategic. They started with the smallest pivot projects, ensuring quick feedback and minimal risk. Once they fine-tuned their processes, they moved on to one or two medium-sized projects. Success in these gave them the confidence to scale their operations, handling many small projects simultaneously before finally tackling a significant enterprise project.
The road, however, was bumpy. Resistance came from unexpected quarters. The finance team, led by Robert, was a significant hurdle. Robert was stuck in his old ways and questioned every expense and new hire.
In a tense meeting, Robert challenged the budget again. "Why do we need so many new hires?" he demanded.
Paul remained calm. "Robert, these investments will pay off in the long run. We need to think about scalability."
Michael backed him up, "We have data to support these decisions. Our projections show significant returns."
Paul had to convince him that investing now would pay off later. This involved presenting detailed projections and demonstrating the long-term benefits of the new model.
The challenges faced during these trials laid the groundwork for the following successes.
### **Chapter 5: Gaining momentum**
![pulp fiction style, showing a Latin man blacksmith forging a sword](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wvlzrureydpeio3azu3g.png)
By the end of the first year, they were gaining momentum. The second round of pilot projects was smoother. They delivered on time and within budget. Clients were happy. Paul could see the vision coming to life.
He knew they needed a solid technology foundation, so they adopted new tools like project management software, collaboration platforms, and automation tools. Security was a top priority, and they ensured compliance with industry standards.
Training was critical. Michael and Sarah proposed launching an extensive training program using the ADKAR model: Awareness, Desire, Knowledge, Ability, and Reinforcement. Employees learned new skills and adapted to new roles.
Michael emphasized in a meeting, "We need to stay ahead of the curve, or we'll be left behind."
Sarah added, "Our people are our greatest asset. Investing in their growth is investing in our future."
Paul nodded, appreciating their initiative, "Let's make sure the training program is comprehensive and ongoing."
With a solid foundation and a motivated team, they were ready to tackle larger challenges and scale their operations.
### **Chapter 6: The Turning Point**
![Ancient Greek warrior](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ht2nsdah3pxaex7pptcw.png)
Year two was pivotal. They were ready to scale. The Vanguard worked tirelessly. They expanded their service offerings. More clients signed up for complete outsourcing solutions. The company rebranded itself as an outsourcing leader. Paul's vision was becoming a reality.
They held brainstorming sessions to innovate. Ideas flowed, and creativity soared. They used Design Thinking to solve complex problems. In one such session, Michael suggested involving clients more directly. "Their input can guide us to tailor solutions perfectly," he said. This client-centric approach boosted satisfaction and loyalty.
Paul shared yearly reports of progress and performance, ensuring everyone was informed and involved and building trust and transparency. He learned to navigate the corporate landscape, balancing different interests.
The board was happy with the progress but still had concerns about the plans for the next two years and potential risks. "We need to ensure our strategies are robust enough to handle unforeseen challenges," one board member remarked during a review meeting.
### **Chapter 7: Overcoming Setbacks**
![warriors fight with a gigantic spider](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fmn6xg9hlsl60gvngpeg.png)
While some things went smoothly, a significant project ran into trouble. Deadlines were missed, and costs exceeded the budget. The client was furious.
Nonetheless, the Vanguard acted quickly and decisively. They organized a crisis meeting and used the '5 Whys' technique to identify the root causes of the issues. After addressing the problems, they reassured the client, demonstrating their commitment to the project and their ability to handle challenges effectively.
Paul was particularly impressed with Sarah's performance during this crisis. Her leadership and dedication to managing staff morale and transitions were exceptional. This challenging experience strengthened their determination and inspired them to persevere, showcasing their resilience to the audience.
### **Chapter 8: The Final Stretch**
![a giant warrior, and a warrior watching him](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lze4tdlaysv4y2olqytw.png)
By the third year, they were a well-oiled machine. The company had transformed. They delivered projects seamlessly, delighted clients, and stayed ahead of the competition. They were a market leader in outsourcing services.
Paul's journey had been challenging. There were mistakes and setbacks but also victories and lessons learned. The Vanguard was not just a team but a close-knit unit that stood together in the face of change. Michael Vas was not just a colleague but his trusted lieutenant. Sarah's contributions were equally invaluable; her leadership in managing staff and morale played a crucial role. Together, they had turned the vision into reality, showcasing the power of unity in driving change.
The board and CEO Alina reviewed the results and expressed their appreciation. "You've achieved 87% of our ambitious goals, which is outstanding," Alina remarked. "We've uncovered many new potentials and opportunities, surpassing our expectations."
They celebrated their success, but Paul knew the journey wasn't over. They had to keep innovating and adapting, always striving for better. This commitment to progress was a crucial part of their success and a testament to their dedication to excellence.
### **Chapter 9: The Lessons Learned**
![The Dwarf Kings' Causeway is an old road that was once used to link the 4 dwarf city-states. It used to lead to the High Marches, where one of the four dwarf city-states fell during the war. This city acted as a buffer between the lands of the Empire and the elves before the elves submitted to the imperial rule and became a protectorate. As the city-state fell, there was very little dwarf presence today and little imperial troop presence. Dnd old road style](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4z7xfcdxorh4tetey8nc.png)
Paul gathered the entire Vanguard team for a final reflection on their journey. He began, "We've come a long way from where we started. What have we learned?"
Michael was the first to speak, "Agility is crucial. Lean Startup principles kept us on track."
Sarah added, "Investing in our people has paid off. The ADKAR model was instrumental."
Paul nodded, "Exactly. Kotter's 8-Step Change Model provided structure, while Design Thinking fueled our innovation."
Erik said, "Marketing the transformation effectively was key to gaining client trust."
"Absolutely," Paul agreed, "And Kaizen kept us agile. Our tools and frameworks were essential, but our people made it happen."
"Teamwork and clear communication were our biggest assets," Dima commented, reinforcing the sense of unity.
Paul concluded, "Let's carry these lessons forward. The market will keep evolving, but with our vision and dedication, we're ready for any challenge."
The team exchanged nods and smiles, feeling a deep sense of accomplishment and readiness for the future.
### **Chapter 10: The Future**
![A figure in ancient warrior attire stands on a hill overlooking a vast, scenic landscape with mountains, a distant city, and classical architecture at sunset.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ic968qvhx0q4a0vx84ln.png)
Paul looked ahead. The company was strong, but it couldn't rest on its laurels. The market would keep evolving. It had to stay ahead and keep delivering value. He knew it could do it. It had the vision, the team, and the tools.
Paul smiled. The journey had been worth it. They had transformed, but the adventure was just beginning. They were ready for whatever came next.
### **References**
- **Kotter’s 8-Step Change Model**: [Read More](https://www.kotterinc.com/8-step-process-for-leading-change/)
- **Lean Startup**: [Read More](https://leanstartup.co/)
- **PDCA Cycle**: [Read More](https://asq.org/quality-resources/pdca-cycle)
- **SWOT Analysis**: [Read More](https://www.mindtools.com/pages/article/newTMC_05.htm)
- **ADKAR Model**: [Read More](https://www.prosci.com/methodology/adkar)
- **Stakeholder Mapping**: [Read More](https://www.mindtools.com/pages/article/newPPM_07.htm)
- **Impact Mapping**: [Read More](https://www.impactmapping.org/)
- **Design Thinking**: [Read More](https://www.ideou.com/pages/design-thinking)
- **Kaizen**: [Read More](https://www.kaizen.com/what-is-kaizen.html)
This was Paul Keen's story. Yours might be different. But the principles remain the same. Embrace change, lead with vision, and never stop improving. Your transformation journey awaits. | jetthoughts_61 |