About Simplilearn Solutions
Simplilearn is one of the most popular online bootcamps for digital skills, helping learners build the abilities they need to thrive in today's digital economy. It provides intensive online training in areas such as data science, cloud computing, cyber security, digital marketing, and project management. In other words, Simplilearn focuses on fields where technology and standards of practice advance rapidly and where demand for qualified professionals far outstrips supply.
Through a wide range of comprehensive certification programs, individual courses, and partnerships with some of the world's most prestigious universities, Simplilearn helps millions of professionals develop the work-ready skills they need to excel in their careers, and thousands of organizations meet their employee upskilling and corporate training needs. Thanks to the program's hands-on, practical approach, 85 percent of Simplilearn's learners have either advanced in their current jobs or found new ones.
Roles & Responsibilities
Basic Qualifications:
● The position requires a four-year degree from an accredited college or university.
● Three years of data engineering, AWS architecture, and security experience.
Top candidates will also have:
Proven understanding of, or experience with, many of the following:
● Designing scalable AWS architectures.
● Building modern data pipelines and data processing with AWS PaaS components (Glue, etc.) or open-source tools (Spark, HBase, Hive, etc.).
● Developing SQL structures that support high volumes and scalability on an RDBMS such as SQL Server, MySQL, or Aurora.
● Modeling and designing modern data structures, SQL/NoSQL databases, data lakes, and cloud data warehouses.
● Experience creating network architectures for secure, scalable solutions.
● Experience with message brokers such as Kinesis, Kafka, RabbitMQ, AWS SQS, AWS SNS, and Apache ActiveMQ.
● Hands-on experience with AWS serverless architectures built on Glue, Lambda, Redshift, etc.
● Working knowledge of load balancers, AWS Shield, AWS GuardDuty, VPCs, subnets, network gateways, Route 53, etc.
● Knowledge of building disaster-recovery systems and security-log notification systems.
● Knowledge of building scalable microservice architectures with AWS.
● Ability to create a framework for monthly security checks, plus broad knowledge of AWS services.
● Deploying software with CI/CD tools such as CircleCI, Jenkins, etc.
● ML/AI model deployment and production maintenance experience is mandatory.
● Experience with API tools and formats such as REST, Swagger/OpenAPI, Postman, and Assertible.
● Version-control tools such as GitHub, Bitbucket, and GitLab.
● Debugging and maintaining software in Linux or Unix platforms.
● Test-driven development.
● Experience building transactional databases.
● Programming experience in Python and PySpark.
● Must have experience engineering solutions in AWS.
● Working AWS experience; AWS certification is required prior to hiring.
● Experience working in an Agile/Kanban framework.
● Must demonstrate solid knowledge of computer science fundamentals like data structures & algorithms.
● Passion for technology and an eagerness to contribute to a team-oriented environment.
● Demonstrated leadership on medium to large-scale projects impacting strategic priorities.
● Bachelor’s degree in Computer Science, Electrical Engineering, or a related field is required.
● Proficiency in Linux.
● Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
● Must have SQL knowledge and experience working with relational databases, including query authoring (SQL), as well as familiarity with databases such as MySQL, Mongo, Cassandra, and Athena.
● Must have experience with Python/Scala.
● Must have experience with Big Data technologies like Apache Spark.
● Must have experience with Apache Airflow.
● Experience with data pipelines and ETL tools like AWS Glue.
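The pipeline skills listed above all follow the same extract-transform-load pattern that tools like AWS Glue automate. As a hedged, standard-library-only sketch (the feed, field names, and table are all illustrative, not part of any real system):

```python
import json
import sqlite3

# Illustrative raw feed; a real pipeline would read from S3, Kinesis, etc.
raw_events = [
    '{"user_id": 1, "event": "login",  "ms": 120}',
    '{"user_id": 2, "event": "login",  "ms": 340}',
    '{"user_id": 1, "event": "logout", "ms": 80}',
]

def extract(lines):
    """Parse each JSON line into a dict (the 'extract' step)."""
    return [json.loads(line) for line in lines]

def transform(records):
    """Keep only login events and convert ms to seconds (the 'transform' step)."""
    return [(r["user_id"], r["ms"] / 1000.0) for r in records if r["event"] == "login"]

def load(rows, conn):
    """Write the transformed rows into a table (the 'load' step)."""
    conn.execute("CREATE TABLE IF NOT EXISTS logins (user_id INTEGER, seconds REAL)")
    conn.executemany("INSERT INTO logins VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(raw_events)), conn)
count = conn.execute("SELECT COUNT(*) FROM logins").fetchone()[0]
```

In a Glue or PySpark job the same three steps appear as source readers, DataFrame transformations, and sink writers; only the scale and the APIs change.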
Job Title – Data Scientist (Forecasting)
Anicca Data is seeking a Data Scientist (Forecasting) who is motivated to apply their skill set to complex and challenging problems. The role centers on applying deep learning models to real-world applications. The candidate should have experience training and testing deep learning architectures, and is expected to work on existing codebases or write optimized new code at Anicca Data. The ideal addition to our team is self-motivated, highly organized, and a team player who thrives in a fast-paced environment, learns quickly, and works independently.
Job Location: Remote (for time being) and Bangalore, India (post-COVID crisis)
Required Skills:
- 3+ years of experience in a Data Scientist role
- Bachelor's/Master’s degree in Computer Science, Engineering, Statistics, Mathematics, or a similar quantitative discipline; a Ph.D. will add merit to the application process
- Experience with large data sets, big data, and analytics
- Exposure to statistical modeling, forecasting, and machine learning, with deep theoretical and practical knowledge of deep learning, statistics, probability, and time-series forecasting
- Training Machine Learning (ML) algorithms in areas of forecasting and prediction
- Experience in developing and deploying machine learning solutions in a cloud environment (AWS, Azure, Google Cloud) for production systems
- Research and enhance existing in-house, open-source models, integrate innovative techniques, or create new algorithms to solve complex business problems
- Experience in translating business needs into problem statements, prototypes, and minimum viable products
- Experience managing complex projects including scoping, requirements gathering, resource estimations, sprint planning, and management of internal and external communication and resources
- Write C++ and Python code along with TensorFlow, PyTorch to build and enhance the platform that is used for training ML models
Preferred Experience
- Worked on forecasting projects – both classical and ML models
- Experience training time-series forecasting methods such as Moving Average (MA) and Autoregressive Integrated Moving Average (ARIMA), as well as neural network (NN) models such as feed-forward and nonlinear autoregressive networks
- Strong background in forecasting accuracy drivers
- Experience in Advanced Analytics techniques such as regression, classification, and clustering
- Ability to explain complex topics in simple terms, ability to explain use cases and tell stories
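The classical methods listed above build on a very simple idea; as a minimal illustration (synthetic demand series, illustrative window size), a moving-average forecast is just the mean of the last few observations:

```python
# Minimal moving-average (MA) forecast: predict the next point as the mean
# of the last `window` observations. Data and window are illustrative only.
def moving_average_forecast(series, window=3):
    if len(series) < window:
        raise ValueError("series is shorter than the window")
    return sum(series[-window:]) / window

demand = [100, 104, 98, 102, 107, 105]  # synthetic monthly demand
next_month = moving_average_forecast(demand, window=3)  # mean of 102, 107, 105
```

ARIMA adds autoregressive and differencing terms on top of this averaging idea; in practice one would reach for statsmodels or Prophet rather than hand-rolling it.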
This profile will include the following responsibilities:
- Develop Parsers for XML and JSON Data sources/feeds
- Write Automation Scripts for product development
- Build API Integrations for 3rd Party product integration
- Perform Data Analysis
- Research on Machine learning algorithms
- Understand AWS cloud architecture and work with 3rd-party vendors for deployments
- Resolve issues in the AWS environment
We are looking for candidates with:
Qualification: BE/BTech/Bsc-IT/MCA
Programming Language: Python
Web Development: Basic understanding of Web Development. Working knowledge of Python Flask is desirable
Database & Platform: AWS/Docker/MySQL/MongoDB
Basic Understanding of Machine Learning Models & AWS Fundamentals is recommended.
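The parser responsibilities above (XML and JSON data sources/feeds) can be sketched with the Python standard library alone; the feeds and field names here are invented for illustration:

```python
import json
import xml.etree.ElementTree as ET

# Illustrative feeds; real sources would be files, APIs, or message queues.
json_feed = '{"orders": [{"id": 1, "total": 250.0}, {"id": 2, "total": 99.5}]}'
xml_feed = '<orders><order id="3" total="120.0"/><order id="4" total="80.0"/></orders>'

def parse_json_orders(payload):
    """Normalize a JSON feed into (id, total) tuples."""
    return [(o["id"], o["total"]) for o in json.loads(payload)["orders"]]

def parse_xml_orders(payload):
    """Normalize an XML feed into the same (id, total) shape."""
    root = ET.fromstring(payload)
    return [(int(o.get("id")), float(o.get("total"))) for o in root.findall("order")]

# Both feeds end up in one common representation, ready for analysis or storage.
orders = parse_json_orders(json_feed) + parse_xml_orders(xml_feed)
```

Normalizing heterogeneous feeds into one shape like this is what makes the later data-analysis and API-integration steps format-agnostic.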
- 5+ years of hands-on experience with penetration testing would be an added plus
- Strong knowledge of programming or scripting languages such as Python, PowerShell, and Bash
- Industry certifications such as OSCP and AWS are highly desired for this role
- Well-rounded knowledge of security tools, software, and processes
- Key responsibility is to design and develop a data pipeline for real-time data integration and processing, executing models where required, and exposing output via MQ, API, or a NoSQL DB for consumption
- Provide technical expertise to design efficient data ingestion solutions to store and process unstructured data such as documents, audio, images, weblogs, etc.
- Developing API services to provide data as a service
- Prototyping Solutions for complex data processing problems using AWS cloud-native solutions
- Implementing automated Audit & Quality assurance Checks in Data Pipeline
- Document & maintain data lineage from various sources to enable data governance
- Coordinate with BIU, IT, and other stakeholders to provide best-in-class data pipeline solutions, exposing data via APIs and loading it into downstream systems, NoSQL databases, etc.
Skills
- Programming experience using Python & SQL
- Extensive working experience on Data Engineering projects, using AWS Kinesis, AWS S3, DynamoDB, EMR, Lambda, Athena, etc. for event processing
- Experience and expertise in implementing complex data pipelines
- Strong Familiarity with AWS Toolset for Storage & Processing. Able to recommend the right tools/solutions available to address specific data processing problems
- Hands-on experience in Unstructured (Audio, Image, Documents, Weblogs, etc) Data processing.
- Good analytical skills with the ability to synthesize data to design and deliver meaningful information
- Know-how with any NoSQL DB (DynamoDB, MongoDB, Cosmos DB, etc.) will be an advantage.
- Ability to understand business functionality, processes, and flows
- Good combination of technical and interpersonal skills with strong written and verbal communication; detail-oriented with the ability to work independently
Functional knowledge
- Real-time Event Processing
- Data Governance & Quality assurance
- Containerized deployment
- Linux
- Unstructured Data Processing
- AWS Toolsets for Storage & Processing
- Data Security
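The real-time event processing and quality-assurance items above share one core pattern: a producer feeds a stream, and a consumer validates and processes each event. A minimal in-process sketch (a `queue.Queue` stands in for a managed stream like Kinesis; the event shapes are illustrative):

```python
import queue
import threading

# In-process stand-in for a stream such as Kinesis; event fields are invented.
events = queue.Queue()
processed = []

def consumer():
    """Drain events until a sentinel, applying a quality check to each."""
    while True:
        event = events.get()
        if event is None:          # sentinel: stream closed
            break
        if "user_id" in event:     # minimal audit/quality-assurance check
            processed.append(event)

worker = threading.Thread(target=consumer)
worker.start()
for e in [{"user_id": 1}, {"bad": True}, {"user_id": 2}]:
    events.put(e)
events.put(None)                   # signal end of stream
worker.join()
```

In production the queue becomes a Kinesis shard or SQS queue, the consumer a Lambda or EMR job, and rejected events would be routed to an audit sink rather than silently dropped.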
Required skill
- Around 6 to 8.5 years of overall experience, with around 4+ years in the AI/machine learning space
- Extensive experience designing large-scale machine learning solutions and deployments, and establishing continuous, automated improvement/retraining frameworks
- Strong experience in Python and Java is required.
- Hands-on experience with Scikit-learn, Pandas, and NLTK
- Experience handling time-series data and associated techniques such as Prophet and LSTM
- Experience with regression, clustering, and classification algorithms
- Extensive experience building traditional machine learning models (SVM, XGBoost, decision trees) and deep neural network models (RNNs, feed-forward networks) is required.
- Experience with AutoML tools such as TPOT or similar
- Must have strong hands-on experience with deep learning frameworks such as Keras, TensorFlow, or PyTorch
- Knowledge of Capsule Networks, reinforcement learning, or SageMaker is desirable
- An understanding of the financial domain is desirable
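Several of the skills above (regression in particular) reduce to fitting simple models; as a minimal illustration, here is a one-variable least-squares fit in plain Python on synthetic data. Scikit-learn's `LinearRegression` generalizes exactly this computation to many features:

```python
# Ordinary least squares for y = slope * x + intercept, on synthetic data
# generated from y = 2x + 1 so the fit is exact.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Closed-form slope: covariance of x and y over variance of x.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
```

Classification and clustering swap the loss function and the optimization routine, but the fit-predict-evaluate workflow the role describes stays the same.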
Responsibilities
- Design and implementation of solutions for ML Use cases
- Productionize systems and maintain them
- Lead and implement the data acquisition process for ML work
- Learn new methods and models quickly and apply them to solve use cases