
  • Investigate processors

Since the main job of a computer is to compute, the main “brain” of a computer (a chip called a microprocessor) is the “central processing unit,” or CPU. There are other chips in a computer, all attached to a circuit board called the motherboard. But it’s the CPU that does most of the thinking when it comes to how a computer works.

    The Central Processing Unit

All computers need at least one CPU, even a smartwatch. Modern CPUs can carry out billions of instructions per second. This gives them the power to finish really hard tasks in less than a second.

Illustration of a CPU as a brain.

    How a Computer Thinks

All computer problems can be broken down into a very simple language, a digital language: 0 and 1. This means that the computer turns lots of switches on and off very fast. The mathematician Alan Turing showed that a simple machine reading and writing symbols one at a time could, in principle, perform any computation. That theoretical device is called a Turing machine.

    In many ways, a light switch is a digital machine. The light switch can be on or off. We can tell whether it’s on or off just by looking at it. The light switch is simple, but imagine many light switches working together. That’s what a computer does.

    Digital Processing

    All modern computers are digital machines turning switches–lots of switches–on and off very rapidly to do all the magical things a computer does. It turns those billions of on and off switches into sights, sounds, words, and even motion.
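The switch idea can be seen directly in code. Here’s a tiny Python sketch (the number and letter chosen are just examples) showing how ordinary values reduce to patterns of 0s and 1s:

```python
# Every number and letter a computer handles is stored as a
# pattern of on/off switches: binary digits, or "bits."

def to_bits(n, width=8):
    """Return the binary (0/1) pattern for a small whole number."""
    return format(n, f"0{width}b")

print(to_bits(5))         # the number 5 as eight switches: 00000101
print(to_bits(ord("A")))  # the letter "A" is stored as the number 65: 01000001
```

Eight switches (one byte) are enough to store any number from 0 to 255, which is why letters, colors, and sounds are all encoded as numbers first.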

    One More Processor

While the CPU is the main brain in a modern computer, there’s another rising star worth talking about. The graphics processing unit (GPU) draws all those cool graphics you see on screen. It’s also fast in a different way: while a CPU works through a handful of instructions at a time, a GPU can perform thousands of simple calculations at once.

GPUs were first designed to create all those wonderful things you see on the screen. Drawing graphics means doing the same simple math on millions of pixels at once, so engineers designed chips built for that kind of parallel work. Powerful chips tend to cost more money, which means the most capable graphics chips are reserved for demanding jobs.

    Illustration of scientists working on a hard problem like climate or genetics in a lab.

Though the GPU had this one job to do at first, scientists started seeing other uses for it. Because of its speed at parallel math, some people started using the GPU for very complex calculations. Mining cryptocurrencies like Bitcoin is one well-known use, and scientists are using GPUs to solve complex problems about the weather, for example.

    The GPU will come up in topics about cloud computing. The GPU is a resource many cloud providers are turning to in order to help speed up computing over the internet.


  • Explore personal computers

    A popular Disney movie from the 1980s, TRON, attempted to explore the inner workings of a computer system by putting a human explorer in the computer itself. Though dated today, the movie enabled people to see inside a computer in a fun way.

    This unit will cover what makes up a personal computer (PC) without all the hassle of having to become digital and fly around inside of one.

    https://learn-video.azurefd.net/vod/player?id=2fcc8ac4-4e50-43e0-8fdb-a6183e516b15&locale=en-us&embedUrl=%2Ftraining%2Fmodules%2Fexplore-computers%2F2-explore-what-personal-computer

    Basic Items

There are certain items that a computer must have, and others that most computers have in practice. When you think of a personal computer, you may think of a laptop or desktop computer. These types of devices generally get the label “PC.” In this module, we’ll use the term “computer” to include a laptop or desktop but also things like a mobile phone, tablet, and even a gaming device.

    The Must-Haves

    All modern computers, in order to fit the name “computer,” include at least two things: a processor and memory. We’ll talk about these items in more detail in the next unit. When you think about the name “computer,” the main thing it does is compute. In order to compute, it has to have a “brain” that does the thinking and somewhere to keep the facts it uses to compute. This is what the processor and memory do in computers.

    The May-Haves

    Most computers these days also have long-term storage (a place where the device can access data over time) and a network adapter for talking to other computers. As the parts that make up these things have gotten smaller, these additional parts can be made to fit into things as small as a light bulb.

    Putting It Together

    These four parts make up the basic parts most modern computers need. The processor does the thinking. The memory remembers items to help the processor think. The storage keeps things around for later. And the computer uses network adapters to talk to other computers.

    Now that you have the basics, let’s dig into each of these items a bit more.


  • Explore accessibility

    Like many modern devices, computers were largely designed around most people’s abilities. People needed to see to use a monitor. They needed to use their hands to use a mouse and keyboard. They needed to be able to hear the sounds the computer made. Many times, designing for the majority of people makes sense at the start. But over time, opening up designs to as many people as possible is essential.

Accessibility is the term for how easily people of all abilities, including people with disabilities, can use something. The word access is built in. As programmers think about accessibility, access for all should be the goal.

    Access for All

    Making computers that everyone can use takes a lot of thought. Some computers are made specifically for people with unique needs. Sometimes, tools are made to help those with unique requirements use standard computers. Programmers need to build programs that work for both types of computers. In the past, as the industry was just forming, accessibility was an afterthought. Today, it’s being written into the code.

    Here are some examples of areas that computer scientists have had to think about to make computers more accessible.

    Vision

    People with low vision may need to enlarge the text on the screen. Doing so may impact everything else that’s being shown. Suppose the text on a button gets bigger, but the button doesn’t. The text may go outside the button, making it hard to read. So the program needs to adjust everything when some parts are enlarged.

    Diagram showing an eyeball.

    Color is another consideration. People with color vision deficiency can’t rely on color alone to tell the difference between things on a screen. Programmers shouldn’t rely on color alone to communicate important information.

    How items contrast with each other can be a challenge for certain people. People with light sensitivity may not be able to see a light color on top of a white background. Ensuring items have good contrast can help make a computer more accessible.
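Programs can even check contrast automatically. Here’s a minimal Python sketch of such a check, using the contrast-ratio formula from the WCAG accessibility guidelines (the colors tested are just examples):

```python
# A sketch of the contrast check many accessibility tools perform,
# based on the WCAG 2 formulas. Colors are (red, green, blue)
# values from 0-255.

def luminance(rgb):
    """Relative luminance of an sRGB color, per the WCAG formula."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    """Contrast ratio between two colors, from 1 (identical) to 21."""
    lighter, darker = sorted((luminance(color_a), luminance(color_b)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255))))  # 21: black on white, the maximum
# WCAG asks for at least 4.5:1 for normal text. Yellow on white falls far short:
print(contrast_ratio((255, 255, 0), (255, 255, 255)) >= 4.5)  # False
```

A test like this can run automatically over every color pair a program uses, flagging combinations that people with light sensitivity or low vision would struggle to read.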

Individuals who are visually impaired also rely on screen readers to access content on their computers. Screen readers read the text on the screen aloud and help people get around. Items like images can’t be read directly, so describing an image in text is important. Images should include alt text: text added to an image so a screen reader can describe it.

    Hearing

While people may tend to think of computing as mainly visual, modern devices use a lot of audio too. Programmers should never rely on audio alone to communicate with the user. For example, most modern phones have the option to vibrate as well as ring when a call comes in. Feedback through vibration and touch like this is called haptics.

    Diagram showing a vibrating mobile phone.

    Some devices enable haptics for other types of things too. Pressing a button on a keyboard may include a slight vibration. This gives people with hearing loss a connection to the keys they’re pressing.

    Motor Skills

    People with limited motor skills can have trouble using a mouse or keyboard. Modern computers have many tools to help. Using their voice to type (speech-to-text) can help a person avoid needing a keyboard. Tools like eye-tracking software and hardware can help those who are unable to use their arms or hands get around a screen.

    It’s in the Code

    Many of these tools are becoming more available. Still, it’s up to programmers to take advantage of them. Thankfully, modern programming tools are making this easier. Some even have tests that can be run to help the programmer tell how accessible their program is and where it can improve.

    Helping people with different requirements is a good enough reason to make programs and computers accessible. However, building accessibility into programs can help everyone. Many tools, like haptics, have a special benefit for specific people but may be useful for everyone.


  • Inspect modern cloud structure

    Programming computers has changed a lot since computers first were made. Most people rarely use large programs that do everything. Instead, programs have gotten smaller and call other programs on the internet when they need to. Programmers can spread or “distribute” the work programs need to do around the internet.

    The same kind of model can be used for hardware like the CPU, hard disks, and memory. The cloud has made this possible.

    Spread around the Cloud

    Just as software can be broken apart and linked together, the hardware that makes up the computers that run the cloud can be separated. If you have a desktop or a laptop, it has everything it needs to work. It has a central processor. It has memory and storage. It has a screen and a keyboard. It has a network adapter and Bluetooth. The computer works as a single unit.

    In the cloud, things work differently. Like modern programs, the functions of a computer can be isolated. The storage (the disks) can be separated from the brains or central processing unit. Even the networking function can be separate from the rest of the functions.

    Diagram showing a single computer with all the components separated out.

    More Power

    Separating these functions makes it easier to add power to specific functions when they’re needed. For example, if you need to solve really hard math problems, you may need more brain power. In the cloud, you can choose to increase the amount of calculating power available to the work you’re doing. If you’re collecting a lot of data, you may need more storage. Cloud computing enables you to add (or reduce) storage as needed.

    Suppose you have a small car that does fine for driving to and from work. If you need power and storage to haul something, you may have to borrow or rent a truck. If cars worked like the cloud, you could add to the size and power of your car when you need to haul an appliance or a few yards of dirt. When you’re done, you can go back to your commuter car.

    Diagram showing vehicles that go from smallest at the left to largest at the right.

    The cloud scales resources like this but goes even further. The cloud supports autoscaling, which means it can provide more space and power on-demand, or when it’s needed. It monitors the work being done (called workloads), and if more power or storage is needed, it adds it. When the need goes away, it scales things down.
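A toy Python sketch of the decision an autoscaler makes: watch the average load and add or remove servers. The thresholds here are made-up values for illustration:

```python
# A toy autoscaler: keep each server comfortably busy by adding
# servers when load is high and removing them when load is low.
# The 30%/80% thresholds are invented for this example.

def autoscale(servers, load_per_server, low=0.3, high=0.8, min_servers=1):
    """Return the new server count for the current average load (0.0-1.0)."""
    if load_per_server > high:
        return servers + 1                 # overloaded: scale up
    if load_per_server < low and servers > min_servers:
        return servers - 1                 # mostly idle: scale down
    return servers                         # load is fine: no change

print(autoscale(2, 0.9))   # 3 -- demand spiked, add a server
print(autoscale(3, 0.1))   # 2 -- demand dropped, remove one
print(autoscale(2, 0.5))   # 2 -- steady, leave it alone
```

A real cloud autoscaler runs a loop like this continuously against live workload measurements, but the core idea is the same: compare the load to thresholds and adjust.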

    This type of hardware and device model works hand-in-hand with modern software programming. This type of computing enables people to use only what they need when they need it. It also means people don’t have to buy computers and write programs they don’t need. They can purchase the power as they need it and stop using it when they don’t.


  • Investigate modern programming

    Computer programs and the way they get to users have changed since computers were first invented. Early programs were self-contained and delivered as a complete package. Many programs had a lot of features that people paid for whether they used them or not.

    Today people call a lot of programs “apps” (short for application). Programmers can make apps that do a few things but work well with other apps. Also, the internet has changed how apps work, communicate, and even get updated.

    The Role of the Web

    In the past, everything a program needed to work had to be included in or with the program. Today, instead of having to include all the needed code with the application, developers can use code running on the internet to get work done. Many devices are connected all the time. When a program needs something (data for example), it can get the data from a program or database on the internet. This means that data doesn’t have to be included in the program.

    Here’s an analogy. Suppose, in a time before mobile phones, you were on a television show where you had to answer quiz questions. You can bring any people or books you need to answer questions on five different topics that you’re given ahead of time. You can’t call anyone to ask questions or to look things up. In order to be able to answer questions, you’d have to bring all the experts and books with you to the show.

Diagram showing a person taking an online quiz.

    Now add a mobile phone. You don’t need to bring all those things with you. If you need to ask someone a question, you call them. If you want to look something up in a book, you look it up on the internet. You only access the information, people, and books when you need them.

Here’s the other cool part. If you don’t need any of those resources, you don’t make a call or look anything up. Not only do you avoid wasting time and energy lugging around a bunch of books you don’t need, you never even have to pick up the phone unless you actually need it.

    Small Code

    Modern app programming works similarly. Small chunks of code, called functions, are on the internet and can be used securely to do work. If you have a banking app on your mobile device, your bank doesn’t have to include all your banking data on your phone. Rather, the app can securely call your bank’s database when it needs the data to show you and delete it from your phone when it doesn’t. This keeps your phone free from having to store too much data.

    These little programs are called microservices. They’re a service because they do work. They’re micro because each bit of code tends to do one job. One service may get data. Another service may log you into a website. A third service may change your username or password.
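As a sketch of the idea, here are three toy “services” written as plain Python functions, each doing exactly one job. In a real system each would run on its own server and be called over the internet; all of the names and data here are made up for illustration:

```python
# Three toy microservices, each with one job. Real microservices
# would live behind web addresses; plain functions show the shape.

def get_profile_service(user_id):
    """One job: look up a user's data."""
    fake_database = {42: {"name": "Ada", "plan": "basic"}}  # made-up data
    return fake_database.get(user_id)

def login_service(username, password):
    """One job: check a sign-in attempt (hard-coded for the demo)."""
    return username == "ada" and password == "correct horse"

def rename_service(profile, new_name):
    """One job: change a username, returning an updated copy."""
    return {**profile, "name": new_name}

# A small app strings the services together only when it needs them.
if login_service("ada", "correct horse"):
    profile = get_profile_service(42)
    print(rename_service(profile, "Ada L.")["name"])   # Ada L.
```

Because each service does one job, any of them can be fixed, replaced, or scaled up without touching the others.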

    Diagram showing digital puzzle pieces.

Breaking code up in this way lets programs do more with less. A small base program can be extended by calling other programs on the internet. This makes the base program smaller, which means it uses fewer resources on your device, preserving your battery and saving space.


  • Explore programming history

    The first computers could only do a single thing. As computers got more powerful and expensive, people wanted to be able to do more with them. Being able to tell a computer to do a wide range of things would make them much more useful.

    To enable this flexibility, the programmable computer was born. Programs are instructions that tell a computer what to do and when to do it. The instructions tell the computer what to show on a screen or what sound to make. Programs are made to respond to what the person using the computer (the user) types, says, or selects.

    Diagram showing a section of code.

    We tend to give instructions to humans in a language like English or Spanish. Computer programs have a language too. Just as you say, write, or sign words to a human, you can type (or speak) a program using the language the computer understands.

    Early Programming

When computers started going mainstream, programs were delivered to customers on removable disks. These disks, called floppy disks, held the entire program.

    For example, a spreadsheet program like Microsoft Excel would be on one or more floppy disks. Customers would buy and then “load” the program onto their computer. The whole program had to be on the disks for Microsoft Excel to work.

    This worked pretty well, but there were a few problems. If a problem in the code was found (called a “bug”), the only way for the customer to get the problem fixed was to get an update. The updates also came on floppy disks that people would get in the mail or at a store. Many times, the updates had to reinstall the entire program with the bug fixed.

    Another problem was that programmers couldn’t update the programs without selling a new version. While this could be good for business, it meant that people had to wait months or years for the new version to come out.

    Diagram showing a person waiting.

A third problem was that early programs seemed bloated to some people. The program had to include everything that someone might want to do with it. Customers had to pay for the full program with all the features whether they used everything or not.

    Big Changes

    This type of programming was used for many years. Over time, however, a different approach was developed. People thought, “What if you could break computer programs into parts that did just one or two things?” Those parts could then be linked together as needed. This is the idea behind modern programming.

    Today programs are delivered over the internet and many times are updated without you knowing about it. Programs have kind of disappeared into the background. But they’re more important than ever. The way of making them has changed, but they continue to be the heart of computing.


  • Research computing history

Computers and computing have advanced at amazing speed. It’s hard to believe that less than 50 years ago, there were no such things as cell phones or the internet, at least not for the average person. In order to better understand how to program computers, we’ll take a look at where they came from. This will help you better understand why computers work the way they do. Knowing how a computer works can help improve the programs that run on it.

    Main What?

Computers were not always as small and mobile as they are now. Today’s computers can do everything from creating intricate drawings to navigating complex roadways. People can do those things with the small device in their pocket.

    The first powerful computers were the size of a room and many of them did a single task. The British Colossus code-breaking computer of the 1940s is a good example. Another is the IBM 7094. These big computers were used in the Mercury and Gemini space programs in the United States.

    Diagram showing a large cabinet, mainframe computer.

    These huge machines were called “mainframes” because of the cabinets (or frames) they were stored in. Some mainframes were programmed with switches. Others were fed tape that told them what to compute. In most cases, these machines did complex math to help humans save time. Programming in early computer science was difficult and time-consuming.

    Shrinking Devices

As technology moved forward and the transistor was invented, computers got smaller. One of the early computers that many people could buy was the handheld calculator. These devices did only one thing (simple arithmetic), but they were pretty affordable and portable. They evolved into computers with screens and the ability to run programs. These changes made computers more flexible and powerful.

    As computers evolved, the computing industry grew quickly. Within 10-15 years, computers were showing up everywhere, from homes to businesses. Over time, they moved beyond being a hobby to being an essential part of work and life for many people.

    As computers became an essential part of life, a problem arose. If something on the computer broke or went wrong, people couldn’t get to the data or tools they needed. For example, suppose a business relied on a computer to take customer payments. If that business had one computer that did everything and it broke down, customers wouldn’t be able to pay for their goods. This could seriously hurt business.

    In the early days of modern computing, this happened frequently. So people started making backups. Backups are copies of the data on a computer so it can be put on another computer if needed. Businesses (and some homes) began using multiple computers too, in case one failed.

    Diagram showing tape backup disks.

    Today’s modern computers are much more resilient. New approaches to programming also make computing less prone to fail. For example, many modern programs automatically save work as it’s being created. If the computer program fails, the data isn’t lost.

    A Long Way

Computers clearly have come a long way. Mainframes were a big change in how people did complex work. Smaller “microcomputers” brought computing to many more people and made it mainstream. As computers became more important, they had to get tougher. The next big change in computing is happening now: distributing the work computers do to the cloud, making things more reliable and powerful.


  • Understand bad actors

    There always seem to be people who want to use good things for bad ends. Computers and the internet have a lot of power for good. Unfortunately, some people want to use that power to cause harm.

    This is partly why security is so important. But it also means computer programmers have to take extra care to build security into their systems. Security is something we all have to focus on.

    Phishing

Phishing is when someone tries to get personal data by hiding who they really are. A person may visit a site or get an email that looks real. For example, you may get an email from your favorite shopping site. It may have the company’s logo and all the details at the bottom that make it look like it came from the store. The email says that it’s time for you to update your account information. It provides a link for you to do just that.

    Illustration of money, credit cards, files, and folders on fish hooks.

    When you click the link, it takes you to a web page that looks like your shopping site. But it isn’t. It has been made to look like it, but it’s just there to get your information. This is called a phishing attack.

    There are many ways you can protect yourself from phishing attacks. One way is to check all links before you click them. Most modern desktop browsers and email tools help you with this. If you rest your mouse over a link and wait for a second or two, the tool will show you the actual link. If the text you see says it takes you to www.relecloud.com, but the actual link is to some website you don’t know, be careful!
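The hover check described above can be sketched in a few lines of Python: compare the site named in the link’s visible text with where the link actually goes. The addresses below are examples (relecloud comes from the text above; evil.example.net is a made-up bad site):

```python
# A sketch of a phishing link check: does the site named in the
# link's visible text match the site the link actually opens?

from urllib.parse import urlparse

def looks_like_phishing(link_text, actual_url):
    """Flag a link whose visible text names a different site than its target."""
    # Normalize the visible text into something urlparse can read.
    bare = link_text.removeprefix("https://").removeprefix("http://")
    shown_host = urlparse("https://" + bare).hostname
    real_host = urlparse(actual_url).hostname
    return shown_host != real_host

print(looks_like_phishing("www.relecloud.com",
                          "https://www.relecloud.com/account"))  # False: text and target match
print(looks_like_phishing("www.relecloud.com",
                          "https://evil.example.net/account"))   # True: the link lies
```

Real browsers and email tools do more than this (checking databases of known bad sites, for example), but the core comparison is the same.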

This kind of link preview is one tool programmers have developed to help protect users. Many computer companies also do background checks on web links. They keep databases of links that can cause harm. When you try to click a potentially bad link, the software can warn you.

    Hacking

Hackers come in many shapes and sizes. “Black hat” hackers are after money or want to cause harm. “White hat” hackers can break into systems like any other hacker but have noble goals (like exposing weaknesses). There are even “gray hat” hackers who sometimes break the rules but generally have good intentions.

Illustration of a black hat, a white hat, and a gray hat.

    The black hats are the ones to be most concerned about. Hackers have the ability to break into computer systems by violating security. They may use tools to test thousands of passwords a minute until they find one that works. They write code that can be injected into computers to gain access. They also can write code in programs that seem innocent (like a game) but are designed to steal information.

    Many of the security tools you see today are designed to prevent hacking. Many of them work well, but hackers can still find a way to work around them. Being safe involves trusting software from reputable companies. But it also means being vigilant and aware of what you’re doing.

    Programming for Safety

Part of becoming a good programmer is designing programs with security in mind. There are many “best practices” programmers use to do this. Many of the modern tools programmers use also help make them aware of potential issues. Finally, lots of testing can help catch bugs and issues that leave a program open to hackers.


  • Inspect biometrics

    The term “biometric” refers to using a person’s body (their biology) as a security tool. One example is a fingerprint. Fingerprints are unique. Law enforcement uses them to identify people because they’re hard to copy. Computer scientists have developed tools that can read fingerprints. They can use them to secure computers.

    Beyond Passwords

    Passwords used with another tool like a mobile phone are pretty secure. Many computer scientists believe biometrics are even stronger. A lot of devices now use biometrics alone to sign in a user. On your mobile device, you may rely on face ID or a fingerprint to get access.

    Illustration of a fingerprint.

Modern tools that use this type of identity are advanced. Using a face for ID involves a number of measurements. For example, these tools may measure the distance between the centers of a person’s eyes. They may examine the shape of the mouth. The size of the forehead may play a factor. All of these pieces of data are used together to identify the person.

    The use of a fingerprint or face for identification is so strong that it can be used all by itself. Door lock makers, for example, are putting fingerprint readers on their locks. You can use your fingerprint alone to gain access to your house.

    The Future of Biometrics and Programming

    As biometric systems get more advanced, using voice or patterns within the eye may be a way to sign in. Computer programmers may have access to these security tools to use in their software. Some companies are making these tools available now.

    There are some concerns about the use of biometrics. Privacy is one. A public camera that is able to identify people without their knowledge may violate privacy. Lawmakers are working to ensure that this type of identity is used safely.

Any programmer who uses these tools should let users know that they’re being used. And the data should be stored carefully. Users should always have the chance to delete any biometric data. Users also should have the option to sign in to their devices in ways other than using biometrics.


  • Investigate two factor security

Passwords have been used for decades. With passwords, the information and systems a password protects are only as secure as the password itself. A password like 123abc is easy to remember (which is why people use it), but it’s also easy to guess. Passwords that are easy to guess or crack are insecure. People also use birthdays and favorite colors for passwords. These aren’t secure passwords either. So passwords have gotten a lot of criticism.
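Some quick, rough arithmetic shows why short, simple passwords fall so easily. The guessing rate below is an assumed figure for illustration:

```python
# Rough math on password guessing: a password's strength comes
# from how many combinations an attacker must try.

def combinations(charset_size, length):
    """Total possible passwords of a given length and character set."""
    return charset_size ** length

guesses_per_second = 1_000_000_000   # assumed attacker speed for illustration

# "123abc": 6 characters drawn from lowercase letters and digits (36 choices each)
weak = combinations(36, 6)
print(weak)                                    # 2176782336 possibilities
print(weak / guesses_per_second, "seconds")    # gone in about 2 seconds at this rate

# 12 characters drawn from ~95 printable symbols: astronomically more combinations
strong = combinations(95, 12)
print(strong / guesses_per_second / (3600 * 24 * 365), "years")
```

Each extra character multiplies the work an attacker must do, which is why length matters more than clever substitutions.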

Using fingerprints and faces to authenticate a user is a lot more secure. These methods are being used more and more. And there’s another approach that’s growing in popularity.

    The Phone in Your Pocket

    Two reasons why a fingerprint is secure and easy to use are:

    • It’s hard to copy
    • People always have it with them

    Computer scientists realized there’s another thing many people carry around that fits the same bill. When mobile phones became common, scientists figured out a way to use them like fingerprints. Since most people treat their mobile phones like their wallet or purse, they tend to be carefully guarded. People also tend to have them everywhere they go. So using them as a security device became an option.

    Illustration of a mobile phone and a shopping site using the phone for authentication.

    When you set up an account at a streaming service or a bank, you may be asked to provide your mobile phone number. The bank may then send you a text message with a code. You’ll be asked to enter that code on a form to verify you own the phone. Once you do, the bank can then use that same number in the future to make sure that the person who set up the account is the one accessing it.

    The bank may send you a code each time you sign in. They’ll ask for the new code in addition to your password. You now have two items of information to give them. When you provide two pieces of information, it’s called two-factor authentication (or 2FA).

    Other 2FA Options

    Using a mobile phone is just one way of validating you. A bank could also call a landline and ask you to press numbers to verify who you are. If you don’t have a mobile phone, companies can send you an email with a code, and you enter the code from the email.

    Illustration of an email being received.

    There are also apps called “authenticators” that either generate a code or ask you to pick a number from a list to verify your identity. The app works similarly to the text message in that you have to first show that the phone that is using the app is yours. Once you verify it’s your phone, some authenticators ask if you want to approve the sign-in with a simple yes or no.
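As a sketch of how those codes work, here’s a minimal Python version of the widely used time-based one-time password (TOTP) scheme, defined in RFC 6238, that many authenticator apps implement. The shared secret below is a made-up example:

```python
# A minimal TOTP sketch (RFC 6238): the phone and the server share
# a secret, and both derive the same short code from the current
# 30-second window of time.

import hashlib
import hmac
import struct
import time

def totp(secret: bytes, when=None, digits=6, step=30):
    """Derive a time-based one-time code from a shared secret."""
    now = time.time() if when is None else when
    counter = int(now // step)                    # which 30-second window we're in
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # "dynamic truncation" step
    number = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

secret = b"example-shared-secret"                 # made-up shared secret
# The phone and the server compute the same code inside one time window:
print(totp(secret, when=1_700_000_000) == totp(secret, when=1_700_000_005))  # True
# Two windows later, the old code has expired and a different one is current.
print(totp(secret, when=1_700_000_060))
```

Because the code depends on both the secret and the clock, a stolen code is useless moments later, which is exactly what makes it a good second factor.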

    Passwords can be combined with any other method of verification (like a fingerprint). Any combination of verification methods counts as 2FA. These days though, the mobile device seems to be the most popular way. Using a code in a text message or an authenticator is very common and gives a level of security that goes well beyond passwords alone.
