Getting started with SignalR Alpha for ASP.NET Core 2.0! – My experience

This looks like a long post, but it’s mostly code with brief explanations, since SignalR is still at an early stage. Stick with it until the end 🙂

I had previously heard about SignalR and been told that it’s a library that enables “real time” communication with multiple web clients. My first thought was that it was some kind of message broker (like RabbitMQ and the MQTT protocol, which I’ve used before), but after a little reading and talking, I found out it’s much simpler than that.

A couple of weeks ago, Microsoft announced SignalR (Alpha) for ASP.NET Core 2.0. This blog post is a really good entry point, and if you read it, you get a pretty straightforward idea of what’s new/different, like:

  • JavaScript/TypeScript Client
  • Support for Binary Protocols and Custom Protocols
  • Streaming
  • Scale-out

There are many more changes under “What’s Changed”, which I won’t cover because, as I said, I had zero experience with SignalR before this Alpha; the other topics above won’t be covered here either.

So what will you be talking about?

I will give you my approach and ideas for working with SignalR in a scenario similar to my job’s needs.

My test case scenario and examples

First of all, keep in mind that I will not cover the JavaScript client, only the C# one. I’ll probably need a JavaScript implementation at some point; when I do, I’ll blog about it :).

To show a similar “real world” scenario, I’ll build a simple application simulating the following:

There are different types of workers in a workspace who want to communicate with each other. There are 3 groups:

  • Cool employees
  • Way Cooler Employees
  • Bosses

For obvious reasons, they only want to talk to others in the same group (Cool employees with Cool employees, and so on).

To simulate this interaction, we will have another application that simply performs 3 actions, depending on the key pressed:

  • C  key pressed – send message only to Cool employees
  • W key pressed – send message only to Way cooler employees
  • B  key pressed – send message only to Bosses

OK… so how does all of this work? Let’s try to visualize it:

To recap: we want some employees to belong to the “Cool Employees” group, others to the “Way Cooler Employees” group, and the bosses to belong to the “Bosses” group. Have you noticed that I keep repeating the word “groups”? That was on purpose. In this post, I will show you how you can send messages to specific groups. But before jumping into it, let’s start from the beginning.

Hub

In the previous image, everything points to a hub, but what’s a hub? A hub is a kind of point of access (or bridge, if you prefer) between the caller and the receiver(s). Having a quick look at this class:
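The original screenshot of the class didn’t survive, but the relevant part of the alpha Hub base class looks roughly like this (a sketch from memory of the 1.0.0-alpha bits, not the exact source):

```csharp
// Simplified sketch of Microsoft.AspNetCore.SignalR.Hub (alpha)
public abstract class Hub : IDisposable
{
    public IHubClients Clients { get; set; }       // proxies to the connected clients
    public HubCallerContext Context { get; set; }  // info about the current caller
    public IGroupManager Groups { get; set; }      // add/remove connections to groups

    public virtual Task OnConnectedAsync() => Task.CompletedTask;
    public virtual Task OnDisconnectedAsync(Exception exception) => Task.CompletedTask;

    public void Dispose() { }
}
```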

It gives us access to properties like the Context, Clients and Groups, which we will explore next.

Starting with clients

The default Clients property is of type IHubClients.

This type also implements a generic IHubClients&lt;T&gt; interface, which has the following methods:
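The screenshot with those methods is missing; from memory, the alpha interface was roughly this (treat the exact member list as an approximation):

```csharp
// Sketch of the generic client-selection interface (alpha bits)
public interface IHubClients<T>
{
    T All { get; }                                   // every connected client
    T AllExcept(IReadOnlyList<string> excludedIds);  // everyone but these connections
    T Client(string connectionId);                   // one specific connection
    T Group(string groupName);                       // everyone in a group
    T User(string userId);                           // all connections of one user
}
```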

Since in this case T is of type IClientProxy, we will have the following available:
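Again the screenshot is gone; the proxy surface was essentially a single invoke method (sketch from memory of the alpha bits):

```csharp
// Sketch of the alpha IClientProxy interface
public interface IClientProxy
{
    // Invokes a method on the client(s) this proxy represents,
    // passing the given arguments to the client-side callback
    Task InvokeAsync(string method, object[] args);
}
```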

We are getting somewhere! With this, we already know that when sending a message, we will send it to a group, and our code will look something like this:
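Since the code screenshot is missing, here is the shape of that call (the group and method names are just illustrative placeholders):

```csharp
// Pick the proxy for one group and invoke a client-side method on everyone in it.
// "ReceiveMessage" is whatever name the clients registered their callback under.
await Clients.Group("CoolEmployees").InvokeAsync("ReceiveMessage", "Hello, cool people!");
```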

But how do we register a client in a group? We’ve seen that the hub has a Groups property.

Groups

This is a property of type IGroupManager, which has two methods:
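The screenshot of the two methods didn’t make it into the post; in the alpha bits they were roughly:

```csharp
// Sketch of the alpha IGroupManager interface
public interface IGroupManager
{
    // Adds the given connection to a group (creating the group if needed)
    Task AddAsync(string connectionId, string groupName);

    // Removes the given connection from a group
    Task RemoveAsync(string connectionId, string groupName);
}
```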

That’s it! We now know how to register a client in a group and how to send a message to a specific group. Let’s implement our Hub. We need to allow each connecting client to register itself in a group, so that it only receives the messages it wants, and we need to create the method that says “OK, I will send this information to this group”.

The Hub implementation

Please note that the Groups.AddAsync() method requires a connectionId, which in this case is the connectionId of the client registering itself in a group. To get the connectionId of the current client, we can use the Context.ConnectionId property.
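The hub screenshot is missing, but based on the description above, the implementation is along these lines (the hub class name and SendMessageToGroup are my guesses; RegisterConnectionOnGroup is the method named in the text):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class EntryPoint : Hub
{
    // Each connecting client calls this to join the group it cares about
    public Task RegisterConnectionOnGroup(string groupName)
        => Groups.AddAsync(Context.ConnectionId, groupName);

    // The MessageSender calls this to deliver a message to a single group
    public Task SendMessageToGroup(string groupName, string message)
        => Clients.Group(groupName).InvokeAsync("ReceiveMessage", message);
}
```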

A quick aside:

I wasn’t sure how groups were created: did I need to create a group before trying to add a connection to it, or would the add call create the group if it didn’t exist? Thanks to the amazing community, some of my questions were answered pretty quickly, and it even generated a quick little discussion. Share your questions and ideas with the community: join the ASP.NET Slack. Here’s an example of what I’m talking about:

Another awesome “feature” of open source is that we can have things like this. Gurgen developed an Android client library for SignalR that, according to him:

“At the moment, it works only by websockets transport, but later I will add all other transports
The plus is that, this is only implementation of .net core signal r for android”

 

OK, so before we can run our code and see things happening, there are two things we need to add to Startup.cs:

With routes.MapHub(“entryPoint”) we specify that, in order to connect to the hub, the route must be ourAddress/entryPoint (in my case, http://localhost:52846/entryPoint).
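For reference, the two additions look roughly like this in the alpha (EntryPoint here stands in for your hub class; the name is illustrative):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Registers the SignalR services in the DI container
    services.AddSignalR();
}

public void Configure(IApplicationBuilder app)
{
    // Maps the hub to the /entryPoint route
    app.UseSignalR(routes => routes.MapHub<EntryPoint>("entryPoint"));
}
```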

Testing what we’ve just created

To test this, we need a total of three projects: the project with the hub, one to simulate our clients (employees and bosses), and another to send the messages. We already know that our clients must register themselves in a group so that they receive the desired messages.

To simulate the described scenarios, in the project that simulates the employees and bosses, I decided to create Tasks, each of which simulates a client application by connecting to the Hub, registering itself in a group, and setting the callback. Afterwards, it just stays in an infinite loop. With this in mind, some of the code I will show obviously isn’t the best approach, but it makes the simulation easier/doable.

Connect to the hub and prepare to receive the messages (employees and bosses project)

The connection itself is pretty straightforward: all we need to do is create a HubConnection object and call the StartAsync method. When the connection is established, we register ourselves in a group by calling the RegisterConnectionOnGroup method that we created on the hub. Lastly, we set the callback, defining what should be done when we receive a message. Let’s have a look at the code that simulates a “Cool Employee” client:
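The code screenshot is missing; based on the description above and the note below about GetAwaiter().GetResult(), the shape of it is something like this (the URL and group follow this post’s examples; “ReceiveMessage” is an assumed callback name):

```csharp
Task.Factory.StartNew(new Action(() =>
{
    var connection = new HubConnectionBuilder()
        .WithUrl("http://localhost:52846/entryPoint")
        .Build();

    // Block until connected, so the registration below happens on a live connection
    connection.StartAsync().GetAwaiter().GetResult();
    connection.InvokeAsync("RegisterConnectionOnGroup", "CoolEmployees")
              .GetAwaiter().GetResult();

    // Callback: what to do when a message for our group arrives
    connection.On<string>("ReceiveMessage",
        msg => Console.WriteLine($"[Cool Employee] received: {msg}"));

    while (true) { Thread.Sleep(1000); } // keep the simulated client alive
}));
```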

NOTE: As I just mentioned, connection.StartAsync().GetAwaiter().GetResult() should not be done like that, for obvious reasons (it’s not async at all), but I honestly couldn’t figure out how to await that call inside the Task: since I’m using Task.Factory.StartNew(new Action(…)), adding await would change the method’s signature, so it would no longer be an Action. In this case it just ensures that we invoke RegisterConnectionOnGroup only after the connection is established. Do you know how to solve this? Please PM me, or even better, comment down below!

The MessageSender project

This application just sits in an infinite loop after establishing a connection to the hub (via the same HubConnection logic) and “implements” the logic I previously described:

  • Send a message only to the bosses? Press B
  • Send a message only to the Cool Employees? Press C
  • Send a message only to the Way Cooler Employees? Press W
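The sender’s loop wasn’t shown in the post; given the key mapping above, it might look like this (SendMessageToGroup and the group names are illustrative assumptions):

```csharp
// 'connection' is an already-started HubConnection (same setup as the clients)
while (true)
{
    var key = Console.ReadKey(intercept: true).Key;

    string group = null;
    if (key == ConsoleKey.B) group = "Bosses";
    else if (key == ConsoleKey.C) group = "CoolEmployees";
    else if (key == ConsoleKey.W) group = "WayCoolerEmployees";

    if (group != null)
    {
        // Ask the hub to forward the message to the chosen group only
        connection.InvokeAsync("SendMessageToGroup", group, $"Hello, {group}!")
                  .GetAwaiter().GetResult();
    }
}
```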

With this, we can finally test what we developed. First launch the project that has the Hub (the SignalR project, in this case EmployeeSignalR), then SignalRClient1, which will create the various clients described, and finally the MessageSender. Place the SignalRClient1 and MessageSender consoles side by side, and press the desired key in the MessageSender’s console. See the magic happen!

Here’s an example:

Conclusion

For me, this was a whole new world, since I had never really had contact with SignalR until I created this project. Obviously, some things I said might not be exactly right, but as I stated at the beginning, I tried to “give you my approach/idea about how to work with SignalR in a scenario which is similar to my job’s needs”. Since none of us knows everything, if you spot a mistake or have another approach, please let me know!

A special thanks to David Fowler (b | t) for his help and quick replies to my questions! And also, thanks to Gurgen for his intervention in the explanation and his availability to talk to me about SignalR.

You can download the full example from my GitHub repo.

Thank you very much for reading


PowerShell Modules Central – Share with community – What PowerShell modules are you using?

I think that this is an awesome initiative 🙂

Cláudio Silva's Blog

Like the blog post title states this is all about sharing with others! My idea is to share with the community which PowerShell modules you are using.

Let me introduce to you the PowerShell Modules Central

PowerShellModulesCentral is a GitHub repository that was founded as a central hub to a list of PowerShell modules that people know/use. Each module has a file describing its name, basic information about the module, as well as one or more blog posts/videos from people that have written about or used them.

This way we can reduce friction when people are starting out or are trying to solve similar problems.

Why?

When a new module appears on the PowerShell scene it can be difficult to advertise and gain mindshare among developers/end users who could be interested in it. There are also times when difficulties arise in finding if a good tool exists or not, if…


#tsql2sday – NuGet, Visual Studio, PowerShell & dbatools

Of course you’ve heard the saying:

“If the mountain will not come to Muhammad, then Muhammad must go to the mountain”.

Now replace “mountain” with NuGet and “Muhammad” with PowerShell. Intrigued? Stick with me! 🙂

Before we dive in:

Since I started attending community events, namely TugaIT (where I was also a volunteer), SQLSaturday, and other local events, I’ve interacted with many people who are really cool and experienced, and I even became friends with some. One of those people is Rob Sewell, also known as the SQL DBA with a beard (b | t), a man with an epic beard and an amazing personality. After seeing that he’s hosting this month’s “T-SQL Tuesday”, I decided to join.
The “challenge” (as I like to call it, since for me each blog post is a challenge) is simple:

“spend an hour or so with it and tell us how you got on and what and how you learned”

Ok so what have I done and how?

Just to give some context, I’m currently working with ASP.NET Core 2.0 and Entity Framework Core 2.0, using the “database first” approach, meaning that whenever there’s a change to the database that affects the model (say, a new table or a new column), those changes need to be reflected in my .NET code.
The way to do this is with Scaffold-DbContext, which takes the connection string, the provider, and the output dir as parameters (although the output dir is not mandatory, it’s highly recommended, since you probably want your model in a specific folder). With this you are set: just run the command with those parameters and you’ll see the changes reflected in your output folder.

My problem:

There are some tables that I don’t want mapped in my .NET code (in my case, the openiddict and ASP.NET Identity tables). Looking at the Scaffold-DbContext documentation, I can’t find a way to exclude tables, but I can find a way to say which tables to include. That’s doable… if your database has something like 10 tables!
So I thought: “How can I get all the tables in my database? And once I have them, how can I exclude those that should not be mapped?” (Note that in this case the tables share the same prefix, but we’ll see an example just ahead.) In this post, I’ll be using the AdventureWorks2012 database.

dbatools to the rescue!

The first time I heard about dbatools was when Cláudio (b | t) told me he had started contributing to “this open source project developed by the community that aims to make the DBA’s tasks easier”. Since then, I’ve been following their work, either via Twitter or via Slack, where you can join too!

DO NOT GET INTIMIDATED BY THE NAME!

“That’s not for me, I’m not a DBA, I’m a backend developer”, “those commands must be for those database nerds, not for me”. Well, I’m also a backend developer who writes some SQL queries, but there are some useful commands even for “non-DBAs”, as we will see.
OK, so since dbatools is so easy to install (on Windows 10 you just need to type Install-Module dbatools … ridiculously easy), let’s try to use Scaffold-DbContext in the PowerShell console.
This cmdlet ships with the Entity Framework Core tools. Since the syntax to install those tools is just Install-Package Microsoft.EntityFrameworkCore.Tools, it sounds like we can just open PowerShell and type it in.

Hmm… not so easy. Well, the documentation tells us to open Visual Studio, open the NuGet Package Manager Console, and type the command there. Moreover, here it’s stated that:

“The commands listed here are specific to the Package Manager Console in Visual Studio, and differ from the Package Management module commands that are available in a general PowerShell environment. Specifically, each environment has commands that are not available in the other, and commands with the same name may also differ in their specific arguments. “

So there’s nothing left to do, then: dbatools requires PowerShell, and we can’t use the NuGet commands outside the Package Manager Console, meaning we can’t use them in the PowerShell console.

That’s correct, but Microsoft does not say that you cannot use PowerShell commands in the Package Manager Console!

Let’s try it, and pray that it works!

installDbaToolsError.png

Oh no! It’s red! Chill out and read the error: it’s just stating that in order to install dbatools, it must be executed as administrator. I simply ran Visual Studio as administrator, but a scoped install would probably also work (thank you, Ed Elliott (b | t), for pointing that out 🙂). Doing so, we get the following:

In my case, I already had it installed, but you should have no problems now.
Now that we have dbatools installed, how is it going to help us? After a quick search, I found Get-DbaTable, which, among other things, returns the table names. Using the Get-Help command in the console, we can see that:

We just need to pass the instance, the database name, and our credentials, and then select the Name property. To provide your credentials securely, use the Get-Credential cmdlet. Look at the following example:
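The example screenshot is missing, but the command is along these lines (instance and database names taken from this post):

```powershell
# Prompt securely for the SQL credentials
$myCredentials = Get-Credential

# List every table name in the database
Get-DbaTable -SqlInstance localhost -Database AdventureWorks2012 -SqlCredential $myCredentials |
    Select-Object -ExpandProperty Name
```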

Since we can use native PowerShell cmdlets like Where-Object, we can easily filter out the prefixed tables. Let’s assume we don’t want to include any table that starts with “Employee”. We just need to pipe the result of the previous command to Where-Object {!$_.StartsWith("Employee")} (here you can see my “backend vein”, using “!” and StartsWith where I could have used “pure PowerShell” with -NOT and -LIKE), which results in:

$tableNames = Get-DbaTable -SqlInstance localhost -Database AdventureWorks2012 -SqlCredential $myCredentials | Select-Object -ExpandProperty Name | Where-Object {!$_.StartsWith("Employee")}

Remember that all of this started because Scaffold-DbContext can receive an argument that is an array of strings, corresponding to the tables to be mapped. Let’s see if we can put all of this together and get the expected result:
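Putting it together, the scaffold call in the Package Manager Console might look like this (the connection-string values are placeholders for your own):

```powershell
# $tableNames comes from the Get-DbaTable pipeline shown above
Scaffold-DbContext "Server=localhost;Database=AdventureWorks2012;Trusted_Connection=True;" `
    Microsoft.EntityFrameworkCore.SqlServer `
    -OutputDir Models `
    -Tables $tableNames
```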
scaffoldResult.png

As we can see highlighted in yellow, we get warnings because of some “problems” in the mapping for employees.

Take it to the next level:

To keep this post from getting too long, I’ll write another one showing how you can make this even easier and do the scaffold in a single command.

I hope this helps you, not only with Scaffold-DbContext but with any other NuGet commands (if I can call them that) that depend on PowerShell cmdlets.

My first Microsoft’s documentation correction experience

Since at my job I’m currently working with a lot of things that are new to me (ASP.NET Core 2.0, Entity Framework Core 2.0, OpenIddict, Azure, etc.), lately most of my time is spent reading Microsoft’s documentation. Sometimes, when reading those documents, I find typos. After reading this post from Cláudio Silva (t | b), I felt happier, because now I can help fix those typos! It’s really simple, and you can contribute too! Check out his blog post explaining how.

The Microsoft team is really friendly and makes you feel good for contributing, even with the smallest thing! Just so you understand what I’m saying, check this out:

git

Although this was not my first contribution (https://github.com/aspnet/EntityFramework.Docs/pull/470), I felt really good even with just one little correction 😊

So if you find a typo, or even something you think isn’t well written, make your corrections and submit a PR!

EDIT:
I mean… you even get your GitHub avatar featured on the MS documentation site! Am I the only one who finds that super awesome?

feature.png

Thank you for reading!

[Powershell] – Watch for file changes and perform an action when it does

As you might have noticed, I don’t really like doing the same procedure more than a few times if there’s a way to automate it, and usually there is, so I always investigate.
Currently, I’m working on a project whose deployment process, although simple, really annoyed me. Until I automated it 😊
The deployment is done to a Raspberry Pi running Raspbian Lite (no user interface). The “problem” here is that the solution is developed and built on Windows and then sent to the Raspberry via FTP. As I said, the process is simple: compile the solution, go to an FTP client (I’m using FileZilla, for example), navigate to \solutionFolder\bin\release\, and copy the application.exe to the destination folder on the Raspberry.
We’re talking about early-stage development here, where I compile and test something like 20 times per hour.
Wouldn’t it be nice if, whenever the solution is compiled, the application.exe were copied to the Raspberry automatically? Well, since I’ve already said that I investigate automation and now I’m asking this, you obviously know the answer.
Let’s see:
|___I’m using Windows
|_____Windows has PowerShell
|_______PowerShell uses .NET Framework
|_________.NET Framework has events

According to this documentation:

“An event is a message sent by an object to signal the occurrence of an action. (…) An event sender pushes a notification that an event has happened, and an event receiver receives that notification and defines a response to it”

That’s it!
Use events to detect whenever there is a change to the file, and take some action on it. Hope this was useful for you. See you in the next blog post!

“Hum… this guy is kidding, right? He just throws something into the air and that’s it? No explanation, no help?”

Ahh, I bet you just thought that! I’m just kidding. My objective whenever I write a post is to show how I saw the problem and how I solved it, hoping that it helps you, or that you’ll want to discuss other ways of doing it.
So, where were we? Ah, the events. That is our entry point. Now we just need to find some way of raising an event whenever a file changes. I’ll help you get there faster.

FileSystemWatcher

FileSystemWatcher is a .NET class that “Listens to the file system change notifications and raises events when a directory, or file in a directory, changes.” (documentation).

Let’s see what we can use:

GetMemberFileSystemWatcher

OK, here is what deserves our attention (I’ve used the -Online switch to get the help online):

Adapted from here

Well, we are ready! Let’s create our object to detect changes on a dummy file:

$watcher = New-Object IO.FileSystemWatcher
$watcher.Path = "F:\danielSilva\Filewatcher\"
$watcher.Filter = "dummy_file.txt"
$watcher.EnableRaisingEvents = $true

Oh, but how do we specify the action to be performed when the file changes? And will this action be performed every time the event is raised?

DING DING DING! MILLION DOLLAR QUESTION!

Remember this: “(…)and an event receiver receives that notification and defines a response to it”. We need to find a way to do this.

Let’s use the best cmdlet: Get-Help.
Since we are talking about events, there must be something related to what we want. When I first ran this command, I thought “Nice! The New-Event cmdlet, that’s it!”. It is not, because that cmdlet creates a new event, and we don’t need that: our object already does it. We need some kind of registration… There! You see?! Register-ObjectEvent. Let’s see if it’s useful for our scenario.
For some reason, using Get-Help Register-ObjectEvent gives neither the description nor the examples for this command; if you know why, please let me know in the comment section. The good thing is that we can use the -Online switch, which gives us the following description:
“The Register-ObjectEvent cmdlet subscribes to events that are generated by .NET Framework objects on the local computer or on a remote computer. When the subscribed event is raised, it is added to the event queue in your session.” (documentation)
Following the syntax above, our usage will be something like:

Register-ObjectEvent -InputObject $watcher -EventName $someName -Action $someAction

I’ll spare you the research (at least for now):

  • InputObject: the object that exposes the events. In our case, it will be our $watcher variable, which holds our object.
  • EventName: specifies which event you want to subscribe to. Note that it must be a valid event name on the object passed to InputObject.
  • Action: a script block where you define what should be done when the event is raised.

To recap, our code will look like this:

$path = "F:\DEV\blog\Filewatcher"
$filename = "dummy_file.txt"
$watcher = New-Object IO.FileSystemWatcher
$watcher.Path = $path
$watcher.IncludeSubdirectories = $False
$watcher.EnableRaisingEvents = $True
$watcher.Filter = $filename
$changeAction = [scriptblock]::Create("Write-Host 'I have been summoned'")
Register-ObjectEvent $watcher -EventName "Changed" -Action $changeAction

Here we subscribed to the event raised when “dummy_file.txt” changes. As an example, we will just write something to the file and save it. Let’s see it in action:

commandExecution.gif

Wait, it printed twice. Why? My assumption, based on the event’s description (“The Changed event is raised when changes are made to the size, system attributes, last write time, last access time, or security permissions of a file or directory in the directory being monitored.”), is that it’s being called twice: once because the file size changes, and again because the last write time changes too. Now, because no one knows everything, here’s some behaviour I haven’t yet fully understood. I set the NotifyFilter property to only raise the event for the last write time, by doing the following:

$watcher.NotifyFilter = [System.IO.NotifyFilters]::LastWrite

CallTwice
As you can see, even after setting this property to only notify when LastWrite occurs, it still prints twice. If you know what I’m missing, please feel free to say so in the comments, or via any other contact.

And that’s it. You can adapt the code to your needs, but I think this gives you the core idea.
One remark: “When the subscribed event is raised, it is added to the event queue in your session.” (documentation)
This means that this watcher only works as long as the session is active, which also means that as soon as you close the window, the watcher dies.

Applying what we have just done to the previously described case

Although I won’t do the build and use the .exe file, the behavior is the same: when the file changes, it is deployed to the Raspberry. The file is transferred the same way I used in my previous post. Here’s what the code looks like:

$piIp = "192.168.1.90"
$path = "F:\DEV\blog\Filewatcher\"
$filename = "dummy_file.txt"
$fileFullPath = $path + $filename
$destFolder = "/home/pi/Documents"
$myPiPassword = Get-Content -Path "F:\DEV\blog\myPiPassword.txt"
$watcher = New-Object IO.FileSystemWatcher
$watcher.Path = $path
$watcher.IncludeSubdirectories = $False
$watcher.EnableRaisingEvents = $True
$watcher.NotifyFilter = [System.IO.NotifyFilters]::LastWrite
$watcher.Filter = $filename
$changeAction = [scriptblock]::Create("pscp -l pi -pw $($myPiPassword) $($fileFullPath) pi@$($piIp):$($destFolder)")
Register-ObjectEvent $watcher -EventName "Changed" -Action $changeAction

And here is the code in action:

finalExample.gif

To sum up

So, this is the way I found to be aware of file changes and take some action when they happen. If you want to act when something occurs, my guess is that events and Register-ObjectEvent might be a good entry point. Have you ever been in a situation like this? Please share your experience in the comment section.
Thank you for reading!

Fetch me the MAC address of those 17 Raspberries, please!

So, as the title states, I was recently asked to get the MAC address of multiple Raspberry Pis, write them down on paper, and send them in an email. Keep in mind that all of them will have the same operating system image (I used this excellent program), which means that I only have to configure one.

The first idea

In the beginning, someone said: “Well, when we turn on the Pi to test if it’s good, we run the command to get the MAC address; it doesn’t take that long”. I thought: “Hmm, well, OK, we can probably do that”. But even so, I started wondering. There are two ways of doing it:

  • Plug a monitor, keyboard, and mouse into the Raspberry -> run the command to get the MAC address -> write it down on paper and on my computer.
  • Connect via SSH -> run the command -> write it down on paper and copy the output to my computer.

I think that, for obvious reasons, the first method is out of the question… right!? Let’s assume we have a way to know the IP of each Raspberry (that’s the part I need to improve). For this scenario, I used this Android app, but I’ll discuss that later.

Ok, so we go with the second way!

 “NO GOD! PLEASE NO!!!”! Hold your horses! Are you even thinking about what you are saying!? Are you crazy?!

“Hum Daniel, it’s only 17… it’s not that much, come on”

Well, in my limited experience, it is NEVER “only”! Even so, let’s look at an example of getting the MAC address of one Raspberry, and in this example I didn’t mistype the password or the IP (you can click on the image to open it in a new window):

piGif

Now you do it 16 more times if you want.

Meme

Now my thought

Since I only have to configure one, I can set up a script that runs on first boot, gets the MAC address, and writes it to a file. Let’s see how we can do this:

Here we create a dummy file that serves as a flag: on first boot, the file does not exist, so the script gets the MAC address (notice that I’m using the hostname as the name of the output file, to uniquely identify each device) and then creates the flag file.
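The script itself isn’t shown above, but a minimal sketch of the idea, assuming eth0 and the /home/pi/Documents folder used later in this post, would be:

```shell
#!/bin/bash
# Flag/output file named after the hostname, so each Pi produces a unique file
OUT="/home/pi/Documents/$(hostname).txt"

# Only act on first boot: if the file already exists, do nothing
if [ ! -f "$OUT" ]; then
    # The MAC address of eth0 is exposed by the kernel under /sys
    cat /sys/class/net/eth0/address > "$OUT"
fi
```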

But even this is not enough, since I would have to connect to each one via FTP and download the file, unless… I can find something to do this for me.

And that something is PowerShell

PuTTY Secure Copy client (PSCP) will be our friend, along with PowerShell. Note that PSCP is not a PowerShell module or cmdlet; it’s an executable. At first, I honestly thought that with pscp I could only send files from my current machine to another one, but it turns out I can do the reverse. Remember the “connect to each via FTP and download the file” part? It’s solved right here!

Usage:

pscp [options] [user@]host:source target

(for more info, just type pscp on the command line or in PowerShell)

Our usage will be:

pscp -l pi -pw mypassword pi@XXX.XXX.X.XX:/path/on/raspberry/*.txt path\on\windows\

In this case, I’m getting all the .txt files. Note that I’m using -pw to specify my password, to avoid being prompted for it each time. Obviously, NEVER put passwords in plain text in a script. Since all the Raspberries were cloned, their username and password are the same (this will be changed afterwards, but it’s not covered here); the only thing that varies is the IP. Let’s compose this into a PowerShell script:
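The script screenshot is missing; based on the description that follows, it is roughly this (IPs, paths, and password are placeholders):

```powershell
$ips = "192.168.1.90", "192.168.1.91"       # add more elements with ","
$raspberryPath = "/home/pi/Documents/*.txt" # where the MAC address files live
$destFolder = "F:\macAddresses\"            # local Windows destination folder
$myPiPassword = "mypassword"                # plain text only for this demo!

foreach ($ip in $ips) {
    # Build the pscp command with the variables already expanded...
    $script = [scriptblock]::Create("pscp -l pi -pw $myPiPassword pi@${ip}:$raspberryPath $destFolder")
    # ...and run it
    Invoke-Command -ScriptBlock $script
}
```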

(You can add more elements to $ips using “,”.)
Quicktip: If all IPs are contiguous you can use the following:

$ips = 1..20 | % { "192.168.1.$_" }

where 1..20 generates the numbers 1 to 20, producing the IPs 192.168.1.1 through 192.168.1.20.
As you can see, it’s a pretty simple script: we have all our IPs in an array, the path of the Raspberry folder where the MAC address file is located, our Windows destination folder, and our password (which is not safe either, but that’s another matter).
I use the ScriptBlock.Create static method so that I can use the variables in the pscp command (I’ll have to study variable scopes in PowerShell better), and then use the Invoke-Command cmdlet to say “please run this script block”.
Let’s see this in action (again, you can click on the image to open on a new window):

vs2

What I’m showing here is that before we run the command there is no “Raspberry1.txt” file, which is the file that contains the MAC address, and after a few seconds, there it is. To “prove” that both methods return the same thing, I compare the content of the file created in the previous gif with the one obtained via the script.

Ways to get the IPs and probably other way to do this specific scenario

I didn’t talk much about how to get the IPs, because networking is not really my area. I found out about the Address Resolution Protocol (ARP); when used with the -a switch, it lists the IP and physical address of all the devices connected to the network. I also found that ping -a XXX.XXX.X.XX resolves the hostname for that IP. One idea could be to iterate over all the ARP table entries and ping the IPs with the -a switch, so that we know which IPs belong to a Raspberry, and then match the IPs with the physical addresses. Since arp and ping are not PowerShell cmdlets (I know there are networking modules for PowerShell), I found it hard to manipulate the outputs and didn’t spend much time on that solution.

Thank you for reading and please, share your ideas and opinions with me on the comments!

What am I doing!?

So after a great talk with Eugene (b|t) and a moment of reflection, I came to the conclusion that I should try to write some content about things that I see, things that I want to do, and my personal opinions on different subjects. Even though I’m only starting (this blog and my career/personal programming objectives), my main subjects will be:

  • Cyber Security – mostly sharing articles that I read;
  • PowerShell – new things that I learn;
  • Raspberry Pi – new projects that I see or ideas that I have.

Any suggestion is really appreciated and will help me improve 🙂

Let’s see where this leads me!