Cybersecurity Alerts, News, and Tips

Estonia's police arrested a Tallinn resident who stole 286K ID scans from a government DB

Estonian police arrested a man from Tallinn who is suspected of stealing 286,438 ID scans belonging to Estonian citizens from government systems. The hacker exploited a vulnerability in a photo transfer […]
The post Estonia's police arrested a Tallinn resident who stole 286K ID scans from a government DB appeared first on Security Affairs.


Meteor was the wiper used against Iran’s national railway system

According to research from Amnpardaz and SentinelOne, the recent attack against Iran's national railway system was caused by a wiper malware dubbed Meteor, not by ransomware as initially thought. Meteor was a previously undetected strain of malware, but experts […]
The post Meteor was the wiper used against Iran’s national railway system appeared first on Security Affairs.


The Engineering Executive Operating System – Part 2

Today on Dev Interrupted, we premiere the second and final episode of our two-part series with Tech Executive Consultant and author, Aviv Ben-Yosef.
If you haven’t listened to the first episode, I highly recommend checking it out. Aviv is an expert at onboarding new employees, particularly those joining leadership positions.
In episode 2 we discuss how leadership can provide a workplace culture where employees are encouraged to speak up and question leadership decisions as well as how to create high-impact R&D organizations.
Get a sample chapter of Aviv’s book here.

Part 2 Highlights Include:

Each member of a team becoming a force multiplier
“Chutzpah driven development”
How to have candid discussions between leadership and employees
Providing a culture where employees can tell leadership they are wrong
Creating high-impact R&D organizations and teams

Join the Dev Interrupted Server
With over 1,400 members, the Dev Interrupted Discord community is the best place for engineering leaders to engage in daily conversation. No salespeople allowed. Join the community >>


Windows 10 KB5004296 Cumulative Update released with gaming fixes

Microsoft has released the optional KB5004296 Preview cumulative update for Windows 10 2004, Windows 10 20H2, and Windows 10 21H1. This update contains fixes for gaming issues experienced by Windows 10 users since March.
This cumulative update is part of Microsoft’s July 2021 monthly “C” update, allowing users to test the upcoming fixes scheduled for next month’s August 2021 Patch Tuesday.
Unlike Patch Tuesday cumulative updates, this preview update only contains bug fixes, performance enhancements, and improvements to existing features and does not include any security updates.
Windows users can install this update by going into Settings, clicking on Windows Update, and selecting ‘Check for Updates.’
As this is an optional update, you will be prompted to click on the download and install button before Windows 10 will install the update, as shown below.

Windows Update offering the optional KB5004296 update
Windows 10 users can also manually download and install the KB5004296 preview update from the Microsoft Update Catalog.
Gaming fixes included
Since March, Windows 10 users have complained about performance issues, stuttering, and low frame rates when playing games.
The issues were so widely reported at the time that NVIDIA recommended users roll back to older Windows 10 updates to resolve them.
In April, Microsoft released a Known Issue Rollback (KIR) to Windows 10 users to resolve the gaming issues, but the problems persisted for many users.
Today, as part of the KB5004296 update, Microsoft has fixed an issue that prevented power plans and Game Mode from working as expected, causing lower FPS and reduced performance for gamers.
“Addresses an issue that prevents power plans and Game Mode from working as expected. This results in lower frame rates and reduced performance while gaming,” the company noted in the changelog.
What’s new in Windows 10 KB5004296
After installing this update, Windows 10 2004 will be updated to build 19041.1151, Windows 10 20H2 will be updated to build 19042.1151, and Windows 10 21H1 will be updated to build 19043.1151.
The Windows 10 KB5004296 cumulative update preview includes multiple improvements or fixes, with the highlighted ones listed below:

Updates an issue that prevents gaming services from opening certain games for desktop users.

Updates an issue that prevents you from entering text using the Input Method Editor (IME). This might occur, for example, after startup if you have set the power options to shut down a laptop by closing its lid.

Updates an issue that plays the sound for selecting something in a game loudly when you press the trigger button on a game controller. 

Updates an issue that prevents power plans and Game Mode from working as expected. This results in lower frame rates and reduced performance while gaming.

Updates an issue that fails to detect that you are connected to the internet after you connect to a virtual private network (VPN). 

Updates an issue that causes printing to stop or prints the wrong output. This issue occurs when you print using a USB connection after updating to Windows 10, version 2004 or later.

A full list of fixes can be found in the KB5004296 support bulletin.
Microsoft also released the KB5004293 cumulative update preview for Windows 10 1909 today. Installing the Windows 10 1909 update will increase the build number to 18363.1714.
The full list of fixes in KB5004293 can be found in the associated support bulletin.


How to Maintain Cybersecurity in the Remote Work Era

While adapting to the current global crisis, businesses had to rapidly transition to a remote workforce to meet customer demands through digital channels. However, this sudden shift to remote operations presented a whole new set of cyber risks. While working remotely, employees may use home computers to log in to company networks and access confidential information, relying on home internet connections instead of the company's secured network. For remote workers, the importance of cloud infrastructure has increased, and they may also be using third-party apps to stay connected with others. And since a work-from-home setup, unlike an office environment, may be somewhat more relaxed about data security, users may not be vigilant enough about cyber risk.
Cybersecurity Risks in a Remote Work Setup

Using unsecured endpoint devices: Unsecured endpoints make systems more vulnerable to cyberattacks.
Personal use of laptops: Employees use office laptops for personal purposes, such as personal email, shopping sites, or social sites, which may put company data at stake.
Lack of physical security: A device can be compromised by leaving a laptop open or leaving it in an unlocked car.
Phishing attacks: Phishing attacks are more likely while working remotely than while working in an office environment.
File sharing: Many companies encrypt files while storing or transferring them on their network, but that is not always possible for remote employees.

How to Protect Against a Security Threat While Working Remotely
When it comes to protection against security threats while working remotely, there is no one-size-fits-all approach. There are lots of measures organizations need to take to secure their employees. Let’s look more closely at the measures that should be taken to secure remote employees from cyber threats:

Offer cybersecurity training: According to a report, organizations are at greater risk of cyber threats because of remote employees, so education is the tool that empowers them to work securely. It is therefore important to run training sessions that raise employees' awareness of the latest cyber threats. This training should ensure that employees take appropriate steps to avoid cyber risk. Employees should know how to identify suspicious activity on a device and what initial steps to take before raising a ticket with the company's security department.
Use VPN connections: Compared to office networks, home networks are easier targets for attackers. If employees are working remotely, organizations should ensure that they use a VPN when accessing company email, files, and other systems. A VPN lets you work from home without exposing your traffic, helping to protect data and keep the connection private and secure.
Use multi-factor authentication: Multi-factor authentication (MFA) is one of the most important ways to boost your security. According to a report, MFA prevents 99.9% of cyber attacks. Enabling multi-factor authentication stops hackers from logging into your accounts in case the password has been compromised. It could help in reducing an organization’s vulnerability. 
Secure the endpoints: Unsecured endpoints are the easiest targets of attack. Around 70% of breaches originate from unsecured endpoints. To secure the endpoint, employees should not connect to any public network. They should use reliable and trusted security tools like malware scanners, firewalls, etc. They need to keep their devices patched and updated.
Create strong passwords: As cyber threats increase every day, it's important to maintain password hygiene; it's an efficient way to stay protected from attackers. Numerous reports highlight that users with weak passwords (personally meaningful phrases, birthdays, or other simple combinations) are more vulnerable to attack. Employees should use strong passwords that combine lowercase and uppercase letters, numbers, punctuation, and special characters. To strengthen this layer further, companies should require periodic password changes.
Create data backups: Data backup is an essential part of running a business. From cyberattacks to human error, many cases can cause data loss. Data backup gives you additional security in case you are not able to access your data. A reliable cloud application for data backup can save your business. 
Use a firewall: A firewall is a premier line of defense. It provides intrusion protection, malware blocking, and application control, and it can also offer secure remote access so that employees can reach files safely.
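As a concrete illustration of the password guidance above, a minimal policy check might look like the following Python sketch. The 12-character threshold and the specific rule set here are illustrative assumptions, not an established standard:

```python
import string

def is_strong_password(password: str) -> bool:
    """Check a password against a simple example policy:
    at least 12 characters, with lowercase, uppercase,
    digits, and at least one special character."""
    if len(password) < 12:
        return False
    has_lower = any(c.islower() for c in password)
    has_upper = any(c.isupper() for c in password)
    has_digit = any(c.isdigit() for c in password)
    has_special = any(c in string.punctuation for c in password)
    return all([has_lower, has_upper, has_digit, has_special])

print(is_strong_password("birthday1990"))      # weak: no uppercase or special characters
print(is_strong_password("C0mpl3x!Passw0rd"))  # meets all the example rules
```

In practice, checks like this belong in the identity provider's password policy settings rather than hand-rolled code, but the sketch shows what "strong combination" means in concrete terms.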

Conclusion
The last year changed the way we live, both personally and professionally, and during this period several organizations faced cyber attacks. Maintaining a remote workforce increases the financial burden on companies' cybersecurity infrastructure. Since remote working may remain a permanent mode of operation given the circumstances, companies need to treat security risk assessment as a priority. By taking preventive measures and staying aware of possible risks, companies can reduce cyber threats significantly.


Estonia arrests hacker who stole 286K ID scans from govt database

Image: Stanislav Rabunski
A Tallinn man was arrested a week ago in Estonia on suspicion of exploiting a vulnerability in a government photo transfer service to download the ID scans of 286,438 Estonians from the Identity Documents Database (KMAIS).
The attacker was apprehended on July 23, following a joint investigation by the Cybercrime Bureau of the National Criminal Police and RIA that began after RIA was alerted to a higher-than-usual number of queries.
“During the searches, investigators found the downloaded photos from a database in the person’s possession, along with the names and personal identification codes of the people,” Oskar Gross, head of the police’s cybercrime unit, said.
“Currently, we have no reason to believe that the suspect would have used or transmitted this data maliciously, but we will further clarify the possible motives for the act in the course of the proceedings.”
Stolen info cannot be used for fraud
The suspect downloaded the government document photos using the targets’ names and personal ID codes (available from various public databases).
RIA added that the stolen information could not be used to perform notarial or financial transactions or gain access to state digital services by impersonating the impacted individuals.
“It is not possible to gain access to e-services, give a digital signature, or to perform different financial transactions (incl. bank transfers, purchase and sales transactions, notarial transactions, etc.) using a document photo, personal identification code, or name,” RIA Director General Margus Noormaa added.
“People whose document photos have been stolen need not apply for a new physical or digital document (passport, ID-card, residence permit card, mobile-ID or Smart-ID, etc.) or take a new document photo. All identity documents and photos remain valid.”
All impacted individuals to be notified via email
Although the vulnerability was introduced into the system several years ago and could have been exploited since then, current evidence doesn't show that such an attack has happened before now.
RIA also said that the data was not transferred from the suspect’s computer after it was stolen from KMAIS, and there is no reason to believe that it was misused in any way.
All Estonian citizens who had their ID scans and personal information stolen during the incident will be notified via email by the Estonian Police and Border Guard Board.
RIA added that this incident is not connected with another breach disclosed earlier this month when the personal data of over 300,000 people was exposed on the Eesti.ee state portal’s access rights management system.


Senate Bill Proposes Further Restrictions on Huawei, ZTE

Cyberwarfare / Nation-State Attacks, Endpoint Security, Fraud Management & Cybercrime

Legislation Would Prohibit Using Stimulus Funds to Buy Companies’ Gear

Scott Ferguson (Ferguson_Writes) • July 29, 2021

Sens. Tom Cotton (left) and Mark Warner

Two U.S. senators are looking to place additional restrictions on the use of telecom equipment from Chinese equipment manufacturers Huawei and ZTE by prohibiting the use of funds from the $1.9 trillion American Rescue Plan stimulus package to buy such equipment.
A bill calling for the ban, the American Telecommunications Security Act, was introduced Wednesday by Sen. Mark Warner, D-Va., who is the chair of the Senate Intelligence Committee, and Sen. Tom Cotton, R-Ark., who serves on the committee.
In June 2020, the Federal Communications Commission officially designated Huawei and ZTE as national security threats, noting that equipment from the two companies could be used to spy on communications on behalf of the Chinese government.
That designation by the FCC means that smaller U.S. telecom companies and wireless carriers can no longer tap into the FCC’s $8.3 billion Universal Service Fund to buy equipment from Huawei and ZTE. The commission has also ordered smaller carriers to remove this gear from their networks, with the government picking up some of the removal costs.
Earlier this month, the FCC finalized a $1.9 billion plan that will help smaller, rural telecommunications carriers pay to rip and replace Huawei and ZTE technologies from their networks (see: FCC Finalizes Plan to Rip and Replace Chinese Telecom Gear).
The Commerce Department in 2019 put Huawei and ZTE on its “entity list,” which effectively blacklisted both companies from doing business in the U.S. The federal government also restricted Huawei’s ability to gain access to U.S. chip technology.
Further Restrictions
The bill that Cotton and Warner introduced would further restrict U.S. companies from using federal dollars to buy equipment from the two Chinese companies.
“American tax dollars should not be sent to Chinese spy companies like Huawei that undermine our national security. The U.S government must take strong action to cut the Chinese Communist Party out of our networks. Americans deserve both reliable and secure telecommunications technologies,” Cotton says.
Warner notes that while bringing reliable broadband and other telecom services to smaller, rural American cities and towns is a priority that is included in the American Rescue Plan “we’ve got to make sure no community is sacrificing network security.”
When the FCC designated the companies as national security threats last year, the commission noted that Huawei is reported to have received “vast subsidies” from the Chinese government, while ZTE violated the U.S. embargo on Iran by sending about $32 million worth of U.S. goods to that nation and by obstructing the Justice Department’s investigation into the matter.
Neither Huawei nor ZTE could be immediately reached for comment on Thursday. Both companies have denied that their equipment poses a threat to U.S. national security, and the two firms have tried, unsuccessfully, to appeal the FCC designation of them as national security threats.
Other Bills
Other lawmakers are supporting additional measures that would further restrict Huawei and ZTE from doing business with U.S. companies.
The House Committee on Energy and Commerce has approved the Secure Equipment Act of 2021, a bipartisan measure that would further instruct the FCC to prohibit the use of telecom equipment from Huawei and ZTE. While carriers are now prohibited from using public funds to help purchase certain Chinese telecom equipment, this bill would prohibit companies from using private dollars to purchase Huawei and ZTE gear as well (see: Congress Considers Measures to Improve Telecom Security).
A similar bill is making its way through the Senate.


Building a RESTful Service Using ASP.NET Core and dotConnect for PostgreSQL

The term REST is an abbreviation for Representational State Transfer. It is a software architectural style created to guide the design and development of the World Wide Web's architecture. REST defines a set of constraints for how a distributed hypermedia system, such as the Web, should be architected. RESTful web services are HTTP-based, simple, lightweight, fast, scalable, and maintainable services that adhere to the REST architectural style.
The REST architectural style views data and functionality as resources accessed via Uniform Resource Identifiers (URIs). RESTful architecture is a client-server paradigm that uses a stateless communication protocol, typically HTTP, for data exchange between server and client. In REST, clients and servers interact through a defined and standardized interface.
This article looks at RESTful architecture and how we can implement a RESTful service using ASP.NET Core and dotConnect for PostgreSQL. We'll connect to PostgreSQL using dotConnect for PostgreSQL, a high-performance, enhanced data provider for PostgreSQL that is built on top of ADO.NET and can work in both connected and disconnected modes.
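The "defined and standardized interface" in REST is essentially the mapping of HTTP verbs onto resource operations. A quick illustrative sketch in Python (the verb-to-operation mapping below is a generic REST convention, not any specific framework's API):

```python
# Conventional mapping of HTTP verbs to CRUD operations on a resource.
VERB_TO_OPERATION = {
    "GET": "read",
    "POST": "create",
    "PUT": "update",
    "DELETE": "delete",
}

def describe(verb: str, resource: str) -> str:
    """Describe what a RESTful request does to a resource URI."""
    operation = VERB_TO_OPERATION.get(verb.upper(), "unknown")
    return f"{operation} {resource}"

print(describe("GET", "/api/product"))   # read /api/product
print(describe("POST", "/api/product"))  # create /api/product
```

The ProductController built later in this article follows exactly this convention: GET reads products, POST creates one, and PUT updates one.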
Prerequisites
To be able to work with the code examples demonstrated in this article, you should have the following installed in your system:

Visual Studio 2019 Community Edition
PostgreSQL
dotConnect for PostgreSQL

You can download .NET Core from here: 

You can download Visual Studio 2019 from here:

You can download PostgreSQL from here:

You can download a trial version of dotConnect for PostgreSQL from here: 

Create the Database
You can create a database using the pgAdmin tool. To create a database with it, follow the steps given below:

Launch the pgadmin tool
Expand the Servers section
Select Databases
Right-click and click Create – > Database…
Specify the name of the database and leave the other options to their default values
Click Save to complete the process

Create a Database Table
Select and expand the database you just created
Select Schemas – > Tables
Right-click on Tables and select Create – > Table…

Specify the columns of the table as shown in Figure 2 below:

The table script is given below for your reference:

CREATE TABLE public."Product"
(
    "Id" bigint NOT NULL,
    code character(5) COLLATE pg_catalog."default" NOT NULL,
    name character varying(100) COLLATE pg_catalog."default" NOT NULL,
    quantity bigint NOT NULL,
    CONSTRAINT "Product_pkey" PRIMARY KEY ("Id")
)

We'll use this database in the subsequent sections of this article to demonstrate how we can work with PostgreSQL and dotConnect in ASP.NET Core.
Features and Benefits of dotConnect for PostgreSQL
Some of the key features of dotConnect for PostgreSQL include the following:

High performance
Fully-managed code
Seamless deployment
Support for the latest version of PostgreSQL
Support for .NET Framework, .NET Core and also .NET Compact Framework
Support for both connected and disconnected modes
Support for all data types of PostgreSQL
Improved data binding capabilities
Support for monitoring query execution

You can learn more about the features of dotConnect for PostgreSQL here. The following are some of the advantages of dotConnect for PostgreSQL:

Enables writing efficient and optimized code
Comprehensive support for ADO.NET
Support for Entity Framework
Support for LinqConnect
Support for both connected and disconnected modes

Introducing dotConnect for PostgreSQL
dotConnect for PostgreSQL is a high-performance data provider for PostgreSQL built on ADO.NET technology. You can take advantage of the new approaches to building application architecture, boosting productivity and making it easier to create database applications. Formerly known as PostgreSQLDirect.NET, it is an improved data provider for PostgreSQL that provides a comprehensive solution for building PostgreSQL-based database applications.
A scalable data access solution for PostgreSQL, dotConnect for PostgreSQL was designed with a high degree of flexibility in mind. You can use it effectively in WinForms, ASP.NET, ASP.NET Core, two-tier, three-tier, and multi-tier applications. The dotConnect for PostgreSQL data provider may be used as a robust ADO.NET data source or an effective application development framework, depending on the edition you select.
Create a New ASP.NET Core Web API Project in Visual Studio 2019
Once you’ve installed the necessary software and/or tools needed to work with dotConnect for PostgreSQL, follow the steps mentioned in an earlier article “Working with Queries Using Entity Framework Core and Entity Developer” to create a new ASP.NET Core 5.0 project in Visual Studio 2019.
Install NuGet Package(s)
To work with dotConnect for PostgreSQL in ASP.NET Core 5, you should install the following package into your project:
Devart.Data.PostgreSql
You have two options for installing this package: either via the NuGet Package Manager or through the Package Manager Console window by running the following command:
PM> Install-Package Devart.Data.PostgreSql
Programming dotConnect for PostgreSQL
This section talks about how you can work with dotConnect for PostgreSQL.
Create the Model
Create a class named Product with the following code in there:

public class Product
{
    public int Id { get; set; }
    public string Code { get; set; }
    public string Name { get; set; }
    public int Quantity { get; set; }
}

This is our model class which we’ll use for storing and retrieving data.
Create the RESTful Endpoints
Create a new controller class in this project and name it ProductController. Then replace the generated code with the following:

[Route("api/[controller]")]
[ApiController]
public class ProductController : ControllerBase
{
    [HttpGet]
    public List<Product> Get()
    {
        throw new NotImplementedException();
    }

    [HttpPost]
    public void Post([FromBody] Product product)
    {
        throw new NotImplementedException();
    }

    [HttpPut]
    public void Put([FromBody] Product product)
    {
        throw new NotImplementedException();
    }
}

As you can see, there are three RESTful endpoints in the ProductController class. Note the usage of the HTTP verbs in the controller methods. We’ll implement each of the controller methods shortly.
Insert Data Using dotConnect for PostgreSQL
The following code snippet can be used to insert data into the product table of the PostgreSQL database we created earlier:

[HttpPost]
public int Post([FromBody] Product product)
{
    try
    {
        using (PgSqlConnection pgSqlConnection = new PgSqlConnection(
            "User Id=postgres;Password=sa123#;host=localhost;database=postgres;"))
        {
            using (PgSqlCommand cmd = new PgSqlCommand())
            {
                cmd.CommandText = "INSERT INTO public.product (id, code, name, quantity) " +
                                  "VALUES (@id, @code, @name, @quantity)";
                cmd.Connection = pgSqlConnection;
                cmd.Parameters.AddWithValue("id", product.Id);
                cmd.Parameters.AddWithValue("code", product.Code);
                cmd.Parameters.AddWithValue("name", product.Name);
                cmd.Parameters.AddWithValue("quantity", product.Quantity);

                if (pgSqlConnection.State != System.Data.ConnectionState.Open)
                    pgSqlConnection.Open();

                return cmd.ExecuteNonQuery();
            }
        }
    }
    catch
    {
        throw;
    }
}

Read Data Using dotConnect for PostgreSQL
Reading data using dotConnect is fairly straightforward. The following code snippet illustrates how you can read data from the product database table using dotConnect for PostgreSQL.

[HttpGet]
public List<Product> Get()
{
    try
    {
        List<Product> products = new List<Product>();
        using (PgSqlConnection pgSqlConnection = new PgSqlConnection(
            "User Id=postgres;Password=sa123#;host=localhost;database=postgres;"))
        {
            using (PgSqlCommand pgSqlCommand = new PgSqlCommand())
            {
                pgSqlCommand.CommandText = "Select * From public.Product";
                pgSqlCommand.Connection = pgSqlConnection;
                if (pgSqlConnection.State != System.Data.ConnectionState.Open)
                    pgSqlConnection.Open();

                using (PgSqlDataReader pgSqlReader = pgSqlCommand.ExecuteReader())
                {
                    while (pgSqlReader.Read())
                    {
                        Product product = new Product();
                        product.Id = int.Parse(pgSqlReader.GetValue(0).ToString());
                        product.Code = pgSqlReader.GetValue(1).ToString();
                        product.Name = pgSqlReader.GetValue(2).ToString();
                        product.Quantity = int.Parse(pgSqlReader.GetValue(3).ToString());
                        products.Add(product);
                    }
                }
            }
        }

        return products;
    }
    catch
    {
        throw;
    }
}

Modify Data Using dotConnect for PostgreSQL
The following code listing illustrates how you can take advantage of dotConnect for PostgreSQL to modify an existing record:

[HttpPut("{id}")]
public void Put([FromBody] Product product)
{
    try
    {
        using (PgSqlConnection pgSqlConnection = new PgSqlConnection(
            "User Id=postgres;Password=sa123#;host=localhost;database=postgres;"))
        {
            using (PgSqlCommand cmd = new PgSqlCommand())
            {
                cmd.CommandText = "UPDATE Product SET Name = @name WHERE Id = @id";
                cmd.Parameters.AddWithValue("id", product.Id);
                cmd.Parameters.AddWithValue("name", product.Name);
                cmd.Connection = pgSqlConnection;

                if (pgSqlConnection.State != System.Data.ConnectionState.Open)
                    pgSqlConnection.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }
    catch
    {
        throw;
    }
}

Summary
dotConnect for PostgreSQL is a high-performance ADO.NET data provider for PostgreSQL with ORM support. It provides fast native connections to the PostgreSQL database, offers new approaches to building application architecture, and increases developer productivity.


Getting Started With IaC

IaC matters for three reasons. One is the transition to the cloud. More and more workloads are being moved from on-premises data centers to cloud environments. Nothing suggests that this trend is going to stop. However, cloud computing alone isn't a panacea for maintaining scalable and reliable infrastructure. It's just as possible to have an inconsistent, poorly documented set of scripts for cloud infrastructure as it is for a physical data center. IaC, because it enforces proven engineering practices, is how you make order out of the chaos.
The second reason is a greater sophistication in how people use the cloud. Companies are changing architectures, patterns, and ways of working to optimize the benefits they can get. It's no longer simply CapEx versus OpEx. It's about how to incorporate all the practices that make up the engineering lifecycle, such as versioning and testing, to unlock all the value that the cloud can provide. It's about using engineering practices to take advantage of the cloud's potential and innovate faster to drive your business.
The third reason is that the burden of managing infrastructure in the cloud is increasing. The number of cloud services available is growing every year, and more companies are adopting modern cloud architectures (like containers or serverless), which often have many loosely coupled and interdependent components. The result is that the number of cloud resources that people must manage is going up at a tremendous pace. This is certainly a good thing because it means companies are getting more value from the cloud to drive their business forward, but the consequence is an increase in complexity and scale.
For example, one way to get more value from the cloud is to take advantage of the ever-growing number of services that cloud vendors are providing. Those services can speed innovation and accelerate velocity, but remember that with every new service comes new APIs.
Each new service adds complexity to the infrastructure. Increased scale and complexity demand a modern approach to IaC to help you build, deploy, and manage your infrastructure. If you’re managing between 1 and 10 resources, point-and-click probably works fine. When you’re managing between 10 and 100 resources, then “infrastructure as text” or legacy IaC tools might still suffice. But what happens when you have hundreds or thousands of resources, which is not at all uncommon today? On top of that, those thousands of resources change not once a month but multiple times a day. A great way to manage all this is to put in place the same software engineering practices and tools that you use for application code. Ask yourself: 
How can I make sure my infrastructure scales, changes, and evolves rapidly enough to support the business and create a competitive advantage? 
How can I maintain visibility into my cloud infrastructure and any changes to it?  
How can I put in place the policies, security, and guardrails that will ensure safety and reliability? 
How can I best empower my teams to build, deploy, and manage infrastructure through better collaboration and processes? 
A modern approach to IaC is needed to address these questions. It is the critical tool for harnessing the potential of the modern cloud: tried-and-true software engineering practices applied to infrastructure.

The IaC platform you choose is critical. If your goal is to use standard software engineering tools and practices that are already in place, then look for the following qualities when you evaluate your choices.

Standard Languages

Support for standard languages means that your developers can define and configure infrastructure using the same languages used to write application code, such as TypeScript, Go, Python, and C#. Many older IaC tools have their own domain-specific language (DSL), and these can be problematic. Developers often find that common programming constructs are missing. The platform you choose should allow engineers to easily create strongly typed, structured configurations and to use features they’ve always relied on, such as loops, constants, and functions. Another advantage of using standard languages is, of course, that developers already know them and can begin coding right away. Learning the idiosyncrasies and limitations of a DSL can be time-consuming and frustrating.

Standard Development Tools

Using standard programming languages means that you can also use standard development tools such as IDEs. One advantage is, again, familiarity: developers can work in an environment they already understand. The other is that developers can work in environments designed to help them easily author, debug, test, and deploy code.

Testing Frameworks

It’s important that infrastructure is tested thoroughly, just as applications are. A modern IaC platform should support standard testing frameworks, and it should also help your teams expand the types of tests they perform. Standard ops testing focuses on acceptance tests: the ops team spins up infrastructure in the cloud and then tests that infrastructure to see if it’s correct. Of course, if it wasn’t spun up correctly, the team needs to destroy and redeploy it.
That’s not an optimal approach because, potentially, something that shouldn’t have happened already has, depending on how quickly the team reacts. A modern IaC platform should help your teams “shift risk left” through frequent testing before and during deployment. If they’re not already performing them, here are the types of tests your teams should be able to perform with a modern IaC platform.

Unit Tests

Unit tests evaluate the behavior of your infrastructure in isolation. External dependencies, such as databases, are replaced by mocks to check your resource configuration and responses. It’s possible to use mocks because responses from cloud providers are well known and tested: you already know how, given some parameters, the provider will respond. Unit tests run in memory without any out-of-process calls, which makes them very fast. Use them for fast feedback loops during development. Unit tests help you solve problems early in the lifecycle of your infrastructure.

Integration Tests

Integration testing (also known as black-box testing) comes after unit testing, and it takes a different approach. Integration tests deploy cloud resources and validate their actual behavior, but in an ephemeral environment: a short-lived environment that mimics production. It’s often simpler and only includes the first-level dependencies of the code you’re testing. Once the integration tests are finished, you can destroy the ephemeral infrastructure.

Security Tests

Too often, security tests are left until the last minute, or code that’s considered “finished” gets thrown over the wall to a security team that has been left out of the entire development process. The phrase “courting disaster” comes to mind. First, a modern IaC platform should encrypt sensitive configuration data. It should also make it easy to follow standard security practices such as key rotation.
Check to see if the platform you’re evaluating encrypts state metadata and ensures that secret values are never exposed in plain text. The platform should also integrate easily with security services offered by the cloud providers. In addition, as with other types of tests, the IaC platform should help you include security tests that you write yourself in your workflow. Just as you start testing your code early with unit tests, so should you start testing early to find security problems. Those tests belong in your CI/CD pipeline, so the infrastructure is thoroughly tested for vulnerabilities before it’s released.

Creating Reusable Components

With reusable components, you build higher-level resources out of individual ones, creating useful abstractions that can be reused in other places. These components can be written with your company’s best practices built in, tested, and shared within the company and with the community. Using reusable components helps create repeatable, reliable infrastructure. Look to see if the platform you’re considering helps you create these components easily.

Standard Package Managers

If you want to create reusable components, you’ll need a way to package them so you can share them easily. Along with using standard tools, you’ll want support for standard package managers. For example, you might want to put your component into a GitHub repo and publish it through NPM. Your IaC platform should make that a simple task.

Creating Visibility

Central visibility across all infrastructure resources, with a historical view of past changes, is important both for accountability and collaboration. Your platform should give you visibility across your infrastructure by supporting audit logs and the ability to see diffs when cloud resources change (similarly to how teams use collaborative tools such as Git).
Additionally, the platform should allow you to set fine-grained controls over who can access and change your infrastructure.

Support for Multiple Cloud Vendors

Not every company wants to use multiple cloud vendors, but it’s something you should consider. Do you want to leave that option open? If so, look for an IaC platform that won’t lock you into a single provider.

Policy as Code

Another too-often-ignored facet of IaC is policy as code. A modern IaC platform should allow you to apply software engineering principles and approaches to your policies, just as it does with infrastructure. The benefits of policy as code are much the same as they are for infrastructure. Policies continuously enforce your organization’s cloud governance in terms of security, compliance, and cost controls. Policies are unambiguous; they can be written with standard languages and tools, versioned, tested, and finally integrated into the CI/CD pipeline so all infrastructure follows the company’s best practices.

Bringing a modern IaC platform into a startup or a company with many greenfield applications may not be difficult. For most companies, however, it’s not so straightforward. Many companies, both large and small, have a lot of infrastructure that was created by pointing and clicking in the console of a cloud provider. That’s how many new projects get started. Then, one day, an ops engineer wakes up and realizes that the new project is now production infrastructure. To make it more “official,” the team writes a runbook or a wiki that describes what buttons to click when someone wants to perform a common task. Another common situation is that there are Bash or PowerShell scripts floating around that only one or two people know about. What do you do if that’s your situation?

Stay Calm

Remember that change can be scary. Many people feel paralyzed when they think about touching their infrastructure: it’s too complicated and they don’t understand how it works. Take the time to build up your confidence.

Define What Is Good

The first step, perhaps even before you begin to evaluate tools and approaches, is to define what “good” looks like to your company. Achieving that ideal depends on understanding what assumptions will remain true regardless of which tools you use. A team made up of all the stakeholders is one way to define what your company wants to achieve with its cloud infrastructure.

Pick a Few Tools to Evaluate

After thinking about the critical points listed above, narrow your search down to a few candidates to evaluate. You might want to design a small project whose only purpose is to test the platform and see how well it helps you reach your goals.

Import Existing Infrastructure

Once you’ve selected a tool, try importing some existing infrastructure. If you’re working with the right platform, this should be straightforward.
Integrate With Existing Engineering Practices

Once your infrastructure code is integrated with your continuous delivery pipeline, you can start instituting the same best practices you use with your application code.

Start Small

Start with a new or non-critical service: something that won’t disrupt your business if it fails. Pick a project where you’ll start seeing value early, and then iterate.

The next sections show examples of how an IaC platform can help you implement software engineering best practices. We’re using Pulumi as our IaC platform and TypeScript as the programming language.

Create Reusable Infrastructure Components

A reusable component is a logical grouping of cloud resources that forms a larger, higher-level abstraction and encapsulates its implementation details. Here’s an example of how to create a reusable component for an S3 bucket.

1. Create a file called s3folder.js.

const aws = require("@pulumi/aws");
const pulumi = require("@pulumi/pulumi");

// Define a component for serving a static website on S3
class S3Folder extends pulumi.ComponentResource {

    constructor(bucketName, path, opts) {
        // Register this component with name examples:S3Folder
        super("examples:S3Folder", bucketName, {}, opts);
        console.log(`Path where files would be uploaded: ${path}`);

        // Create a bucket and expose a website index document
        let siteBucket = new aws.s3.Bucket(bucketName, {},
            { parent: this }); // specify resource parent

        // Create a property for the bucket name that was created
        this.bucketName = siteBucket.bucket;

        // Register that we are done constructing the component
        this.registerOutputs();
    }
}

module.exports.S3Folder = S3Folder;

2. Then, in index.js, use this code (the same way you would use any other Node module):

const s3folder = require("./s3folder.js");

// Create an instance of the S3Folder component
let folder = new s3folder.S3Folder("s3-website-bucket", "./www");

// Export output property of `folder` as a stack output
exports.bucketName = folder.bucketName;

Implement Testing

Testing is a large subject. Here we’ll give an example of a unit test that checks that instances have a Name tag, that they use a virtual machine image rather than an inline userData script, and that they don’t have SSH open to the Internet. This example uses the Mocha testing framework. Here is the code to be tested:

import * as aws from "@pulumi/aws";

export const group = new aws.ec2.SecurityGroup("web-secgrp", {
    ingress: [
        { protocol: "tcp", fromPort: 22, toPort: 22, cidrBlocks: ["0.0.0.0/0"] },
        { protocol: "tcp", fromPort: 80, toPort: 80, cidrBlocks: ["0.0.0.0/0"] },
    ],
});

const userData = `#!/bin/bash
echo "Hello, World!" > index.html
nohup python -m SimpleHTTPServer 80 &`;

export const server = new aws.ec2.Instance("web-server-www", {
    instanceType: "t2.micro",
    securityGroups: [ group.name ], // reference the group object above
    ami: "ami-c55673a0",            // AMI for us-east-2 (Ohio)
    userData: userData,             // start a simple webserver
});

This program violates all three conditions. First, create the mocks.  

import * as pulumi from "@pulumi/pulumi";

pulumi.runtime.setMocks({
    newResource: function(args: pulumi.runtime.MockResourceArgs): {id: string, state: any} {
        return {
            id: args.inputs.name + "_id",
            state: args.inputs,
        };
    },
    call: function(args: pulumi.runtime.MockCallArgs) {
        return args.inputs;
    },
});

Next, create the test.  

import * as pulumi from "@pulumi/pulumi";
import "mocha";

pulumi.runtime.setMocks({
    // ... mocks as shown above
});

describe("Infrastructure", function() {
    let infra: typeof import("../index");

    before(async function() {
        // It's important to import the program _after_ the mocks are defined.
        infra = await import("../index");
    });

    describe("#server", function() {
        // TODO(check 1): Instances have a Name tag.
        // TODO(check 2): Instances must not use an inline userData script.
    });

    describe("#group", function() {
        // TODO(check 3): Instances must not have SSH open to the Internet.
    });
});

 Next, implement the first test, which checks that instances have Name tags. 

// check 1: Instances have a Name tag.
it("must have a name tag", function(done) {
    pulumi.all([infra.server.urn, infra.server.tags]).apply(([urn, tags]) => {
        if (!tags || !tags["Name"]) {
            done(new Error(`Missing a name tag on server ${urn}`));
        } else {
            done();
        }
    });
});

Following this pattern, you can write the other two tests yourself.

Implement Policy as Code

Decide on the policies and security requirements that should hold true for the entire organization. Those policies should also be part of your CI/CD pipeline, just as your other tests are. The following example is a policy that prevents public read access to data in S3.

import * as aws from "@pulumi/aws";
import { PolicyPack, ReportViolation, validateResourceOfType } from "@pulumi/policy";

new PolicyPack("policy-pack-typescript", {
    policies: [{
        name: "s3-no-public-read",
        description: "Prohibits setting the publicRead or publicReadWrite permission on AWS S3 buckets.",
        enforcementLevel: "mandatory",
        validateResource: validateResourceOfType(aws.s3.Bucket, (bucket, args, reportViolation) => {
            if (bucket.acl === "public-read" || bucket.acl === "public-read-write") {
                reportViolation(
                    "You cannot set public-read or public-read-write on an S3 bucket. " +
                    "Read more about ACLs here: https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html");
            }
        }),
    }],
});
 

A modern approach to IaC is a great way to reduce cloud complexity, unlock the potential of the modern cloud, and achieve faster innovation. With a modern IaC approach, you apply standard software engineering practices and tools to infrastructure, usually with an IaC platform that supports these practices. Briefly, here is a summary of the high-level benefits you can expect.

Increase Innovation, Velocity, and Agility

With a modern IaC approach, teams can apply the same practices, testing rigor, and automation of modern software development to cloud infrastructure. This increases the rate and reliability of releases so that companies can react to customer feedback and iterate quickly.

Decrease Infrastructure Risks

Because developers can use standard testing frameworks, IaC “shifts risk left”. Early, frequent, and thorough testing can be part of the authoring process and CI/CD pipeline. Since policy and security requirements are also written as code, compliance and safety are automatically tested with every deployment.

Foster Closer Collaboration

Modern IaC platforms use standard tools and languages, which can break down silos between infrastructure, application development, and security teams. Using shared practices and tools increases collaboration between different teams.