Eng Aden

I am a Front-End Developer

Mohamed Mohamud

I am Mohamed Mohamud, from Somalia, currently pursuing a B.Tech in Computer Science and Systems Engineering (2020-2024) at Andhra University College of Engineering (A), Visakhapatnam, Andhra Pradesh, India.
I am also pursuing a Bachelor of Arts (B.A.) (2021-2024) at Andhra University School of Distance Education, Visakhapatnam.

I am a dynamic, resourceful, result-oriented person with a strong sense of motivation, driven by the desire to achieve set goals and objectives. I have excellent analytical skills, a strong personality, and good interpersonal skills, which I use to create team spirit among colleagues so that we can collectively face and successfully accomplish assigned duties and responsibilities.

  • D.No. 2-10-26/10, Sri Sai Residency, Near Sai Baba Temple, M.V.P. Colony, Sector 9, Visakhapatnam, Andhra Pradesh - 530017
  • +919347751357
  • ugaska1717@gmail.com
  • engaden.com

My Professional Skills

I am skilled in the following areas: full-stack development, mobile app development, graphic design, database administration, operating systems, cloud computing, programming languages, and more.

Web Design 90%
Web Development 85%
App Development 95%
Graphic Design 88%
Cloud Computing 70%
Operating Systems 75%
Database Administration 86%
WordPress 80%

Awesome features

Openness, Creativity, Cultural Experience, Positivity, Commitment, Integrity, Team Spirit, and Community Service.

Animated elements

Data collection on all assigned subjects in the assigned enumeration areas; identifying study sites in consultation with the supervisor.

Responsive Design

Creating highly responsive software and refining it so that it looks attractive and works well across environments.

Modern design

I have solid experience with full-stack development, operating systems, cloud computing, programming languages, and systems programming.

Retina ready

Mostly focusing on digital professionals, whether designers, programmers, or photographers, as well as programming bootcamps.

Fast support

Personable and knowledgeable IT support technician with over 2 years of experience assisting customers with various hardware- and software-related issues.

  • ARTIFICIAL INTELLIGENCE

     What is artificial intelligence?

    While a number of definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offers the following definition in his 2004 paper: "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

    However, decades before this definition, the birth of the artificial intelligence conversation was marked by Alan Turing's seminal work, "Computing Machinery and Intelligence", published in 1950. In this paper, Turing, often referred to as the "father of computer science", asks the question, "Can machines think?" From there, he offers a test, now famously known as the "Turing Test", in which a human interrogator tries to distinguish between a computer's and a human's text responses. While this test has undergone much scrutiny since its publication, it remains an important part of the history of AI, as well as an ongoing concept within philosophy, since it draws on ideas around linguistics.

    Stuart Russell and Peter Norvig then published Artificial Intelligence: A Modern Approach, which became one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and of thinking vs. acting:

    Human approach:

    • Systems that think like humans
    • Systems that act like humans

    Ideal approach:

    • Systems that think rationally
    • Systems that act rationally

    Alan Turing’s definition would have fallen under the category of “systems that act like humans.”

    In its simplest form, artificial intelligence is a field which combines computer science and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines consist of AI algorithms which seek to create expert systems that make predictions or classifications based on input data.

    Today, a lot of hype still surrounds AI development, which is expected of any emerging technology in the market. As noted in Gartner's hype cycle, product innovations like self-driving cars and personal assistants follow "a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation's relevance and role in a market or domain." As Lex Fridman noted in his 2019 MIT lecture, we are at the peak of inflated expectations, approaching the trough of disillusionment.

    As conversations emerge around the ethics of AI, we can begin to see the initial glimpses of the trough of disillusionment.

    Types of artificial intelligence—weak AI vs. strong AI

    Weak AI—also called Narrow AI or Artificial Narrow Intelligence (ANI)—is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. ‘Narrow’ might be a more accurate descriptor for this type of AI as it is anything but weak; it enables some very robust applications, such as Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles.

    Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of AI where a machine would have intelligence equal to that of humans; it would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI), also known as superintelligence, would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, that doesn't mean AI researchers aren't exploring its development. In the meantime, the best examples of ASI might come from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey.

    Deep learning vs. machine learning

    Since deep learning and machine learning tend to be used interchangeably, it’s worth noting the nuances between the two. As mentioned above, both deep learning and machine learning are sub-fields of artificial intelligence, and deep learning is actually a sub-field of machine learning.

    Deep learning is built on neural networks. The "deep" in deep learning refers to the depth of the network: a neural network comprising more than three layers, inclusive of the input and output layers, can be considered a deep learning algorithm.
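
    To make that idea of depth concrete, here is a minimal sketch in Python with NumPy. The layer sizes, weights, and inputs are invented purely for illustration (they are not from the article): an input layer, two hidden layers, and an output layer, i.e. more than three layers in total. Only the forward pass is shown; training is omitted.

    ```python
    # Minimal feed-forward network with more than three layers:
    # input(4) -> hidden(8) -> hidden(8) -> output(1).
    # (Training with backpropagation is omitted; only the forward pass is sketched.)
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Randomly initialized weights and zero biases (illustrative only).
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
    W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

    def forward(x):
        h1 = relu(x @ W1 + b1)        # first hidden layer
        h2 = relu(h1 @ W2 + b2)       # second hidden layer
        return sigmoid(h2 @ W3 + b3)  # output layer: probability-like score

    x = rng.normal(size=(3, 4))       # a batch of 3 example inputs
    print(forward(x))                 # 3 predictions between 0 and 1
    ```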

    The way in which deep learning and machine learning differ is in how each algorithm learns. Deep learning automates much of the feature extraction piece of the process, eliminating some of the manual human intervention required and enabling the use of larger data sets. You can think of deep learning as "scalable machine learning", as Lex Fridman noted in the same MIT lecture mentioned above. Classical, or "non-deep", machine learning is more dependent on human intervention to learn: human experts determine the hierarchy of features needed to understand the differences between data inputs, and this usually requires more structured data to learn from.
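
    To make the "classical" path concrete, here is a small sketch (assuming scikit-learn is available; the features, numbers, and labels are made up for illustration): a human chooses the features up front, and a simple supervised model learns from a labeled dataset.

    ```python
    # Classical ("non-deep") machine learning: hand-engineered features plus a
    # simple supervised model trained on human-labeled examples.
    from sklearn.linear_model import LogisticRegression

    # Hand-chosen features per example: [average word length, exclamation marks]
    X = [[4.2, 0], [3.9, 1], [6.1, 4], [5.8, 5]]
    y = [0, 0, 1, 1]  # labels supplied by a human annotator (supervised learning)

    model = LogisticRegression()
    model.fit(X, y)                   # learn from the labeled examples
    print(model.predict([[6.0, 3]]))  # classify a new, unseen example
    ```

    A deep learning model, by contrast, would be handed the raw text or pixels and learn a useful feature hierarchy itself.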

    "Deep" machine learning can leverage labeled datasets, also known as supervised learning, to inform its algorithm, but it doesn’t necessarily require a labeled dataset. It can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the hierarchy of features which distinguish different categories of data from one another. Unlike machine learning, it doesn't require human intervention to process data, allowing us to scale machine learning in more interesting ways.

    Artificial intelligence applications

    There are numerous, real-world applications of AI systems today. Below are some of the most common examples:

    • Speech recognition: It is also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text, and it is a capability which uses natural language processing (NLP) to process human speech into a written format. Many mobile devices incorporate speech recognition into their systems to conduct voice search—e.g. Siri—or provide more accessibility around texting. 
    • Customer service:  Online virtual agents are replacing human agents along the customer journey. They answer frequently asked questions (FAQs) around topics, like shipping, or provide personalized advice, cross-selling products or suggesting sizes for users, changing the way we think about customer engagement across websites and social media platforms. Examples include messaging bots on e-commerce sites with virtual agents, messaging apps, such as Slack and Facebook Messenger, and tasks usually done by virtual assistants and voice assistants.
    • Computer vision: This AI technology enables computers and systems to derive meaningful information from digital images, videos and other visual inputs, and based on those inputs, it can take action. This ability to provide recommendations distinguishes it from image recognition tasks. Powered by convolutional neural networks, computer vision has applications within photo tagging in social media, radiology imaging in healthcare, and self-driving cars within the automotive industry.  
    • Recommendation engines: Using data about past consumption behavior, AI algorithms can help discover trends that can be used to develop more effective cross-selling strategies. Online retailers use this to make relevant add-on recommendations to customers during the checkout process; a toy sketch follows this list.
    • Automated stock trading: Designed to optimize stock portfolios, AI-driven high-frequency trading platforms make thousands or even millions of trades per day without human intervention.
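
    As mentioned in the recommendation-engine item above, here is a toy sketch of the idea (the user-item purchase matrix and item names below are invented for illustration): compute item-to-item cosine similarity from past purchases, then suggest the item most similar to one already in the basket.

    ```python
    # Toy item-to-item recommendation from a user-by-item purchase matrix.
    import numpy as np

    items = ["laptop", "mouse", "keyboard", "desk lamp"]
    # Rows = users, columns = items; 1 means the user bought that item.
    purchases = np.array([
        [1, 1, 1, 0],
        [1, 1, 0, 0],
        [0, 1, 1, 1],
        [1, 0, 1, 0],
    ], dtype=float)

    # Cosine similarity between item columns.
    norms = np.linalg.norm(purchases, axis=0)
    similarity = (purchases.T @ purchases) / np.outer(norms, norms)
    np.fill_diagonal(similarity, 0.0)  # ignore an item's similarity to itself

    in_basket = items.index("mouse")
    suggestion = items[int(np.argmax(similarity[in_basket]))]
    print(f"Customers who bought 'mouse' may also like '{suggestion}'")
    ```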

     

    History of artificial intelligence: Key dates and names

    The idea of 'a machine that thinks' dates back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article) important events and milestones in the evolution of artificial intelligence include the following:

    • 1950: Alan Turing publishes Computing Machinery and Intelligence. In the paper, Turing, famous for breaking the Nazis' ENIGMA code during WWII, proposes to answer the question 'Can machines think?' and introduces the Turing Test to determine if a computer can demonstrate the same intelligence (or the results of the same intelligence) as a human. The value of the Turing Test has been debated ever since.
    • 1956: John McCarthy coins the term 'artificial intelligence' at the first-ever AI conference at Dartmouth College. (McCarthy would go on to invent the Lisp language.) Later that year, Allen Newell, J.C. Shaw, and Herbert Simon create the Logic Theorist, the first-ever running AI software program.
    • 1967: Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that 'learned' through trial and error. Just a year later, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which becomes both the landmark work on neural networks and, at least for a while, an argument against future neural network research projects.
    • 1980s: Neural networks which use a backpropagation algorithm to train themselves become widely used in AI applications.
    • 1997: IBM's Deep Blue beats then-world chess champion Garry Kasparov in a chess match (and rematch).
    • 2011: IBM Watson beats champions Ken Jennings and Brad Rutter at Jeopardy!
    • 2015: Baidu's Minwa supercomputer uses a special kind of deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human.
    • 2016: DeepMind's AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (over 14.5 trillion after just four moves!). Google had acquired DeepMind in 2014 for a reported USD 400 million.


  • MAN AND MACHINE


    AI has begun to perform lie detection, play complex games like “Go”, diagnose diseases and even create art. But IMD Professor of Leadership and Organizational Behavior, Jennifer Jordan, and industry experts agree that it isn’t time for us to pack up just yet.



    Here are the top five things that executives must consider when implementing AI in their organization:

    1. Understand what AI is, and what it isn’t

    AI is a tool for identifying patterns in data that would be either an impossible or inefficient task for a human. The goal is that the analysis of insights generated from these data patterns would allow computers to take on “human-like” reasoning capabilities.

    “AI always seems like magic to people who don’t know much about it,” says Pedro Bados, Co-Founder and CEO of Nexthink. “It’s a great tool to save time and automate in systems that can be concretely defined.”

    AI is not, however, a substitute for human intelligence. As new data is generated, an AI (artificial intelligence program built on an algorithm) can help better inform the decision-making process, but not lead it altogether.

    “AI can help humans deal with hypercomplex systems,” says Professor Jordan.

    2. Machines often have an advantage

    Novartis CDO, Bertrand Bodson, is aware of the profound implications of AI for the healthcare industry. For certain illnesses that require visual diagnoses, he says, machines may have an advantage.

    “In the case of diabetic retinopathy, an ophthalmologist needs 20 years of training to properly identify it,” says Bodson. “Machines are able to do it in a way that is even more optimized, reducing false negatives and positives.”

    AI has also helped to speed the diagnosis of leprosy in India, where the disease affects more than 200,000 people per year.

    “Now there is simple AI used with a mobile phone applied to the skin,” explains Bodson. “This can help determine whether a person is at risk and must seek treatment.”

    3. Humans have strong domain-changing abilities – AI doesn't

    Humans possess unique cognitive qualities that machines simply cannot yet duplicate. Unlike AI, we are good at domain-switching—rather than being built to do only one specific thing, says Professor Jordan: “Humans can play golf, have a meaningful conversation and then make a movie.”

    Artificial General Intelligence (or the ability for machines to cross domains) is unlikely to appear  in the near future, but this is not only because of the current “simplicity” of AI. It is also because there is no strong pull from industry to create such domain-jumping technology.

    4. Data storage is one thing, data management is another

    The proliferation of data over the past decade has changed the face of data science and brought new challenges in data management. As data storage falls in cost, companies are generating – and saving – more data than ever before.

    “All the vast data we have now can only be analyzed with AI,” says CTO Thomas Gresch – whose company, TX Group, is the largest Swiss publisher. “It would be overwhelming for humans.”

    TX Group uses AI to extract the core message of a text, which is particularly helpful for filtering the online comments on its content.

    “Comments that used to go online after hours can now all be approved for faster publication with machine learning,” says Gresch.
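
    TX Group's actual system is not described here, but as an illustration of machine-learning comment filtering of this kind, the sketch below (assuming scikit-learn; the comments and labels are invented) trains a small text classifier and uses it to decide whether a new comment can be auto-approved or should be held for human review.

    ```python
    # Illustrative comment filter: TF-IDF features plus logistic regression
    # trained on a handful of made-up, human-labeled comments.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    comments = [
        "Great article, thanks for the insight",
        "Very helpful summary of the topic",
        "You are an idiot and this is garbage",
        "Total spam, click my link for free money",
    ]
    labels = [1, 1, 0, 0]  # 1 = approve, 0 = hold for human review

    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(comments, labels)

    new_comment = "Thanks, this really helped me understand"
    decision = classifier.predict([new_comment])[0]
    print("approve" if decision == 1 else "hold for review")
    ```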

    5. AI will be democratized, allowing everyone to harness its power

    Humans still have a vital role to play in gathering and interpreting information.

    While today’s data scientists have years of specialized training to code complex algorithms, this may no longer be necessary in the future for the casual consumer.

    “For me, the Holy Grail is the notion of the citizen data scientist,” confides Bodson. “We are trying to do with AI what Microsoft did with Excel: democratize the technology and open it up for everyone.”

    This could take the form of computer software that serves as an AI interface. Much like the development of Microsoft Excel put spreadsheets and calculations into the hands of people with limited accounting experience, an AI program could do the same with machine learning.

    In conclusion, all three experts and Professor Jordan agree that we, as leaders, should not look at it as AI versus the human, but rather AI plus the human. Even the best technology still requires the general intelligence of the human leader to add industry and societal value.

  • home

    GET A FREE QUOTE NOW

    Optimism is essential to achievement, and it is also the foundation of courage and true progress.

    ADDRESS

    D.No. 2-10-26/10, Sri Sai Residency, Near Sai Baba Temple, M.V.P. Colony, Sector 9, Visakhapatnam, Andhra Pradesh - 530017

    EMAIL

    ugaska1717@gmail.com
    mohayare12@gmail.com

    TELEPHONE

    +91 9347751357

    MOBILE

    +91 9347751357