11+ Apache Aurora Jobs in India
Designation: Specialist - Cloud Service Developer (ABL_SS_600)
Position description:
- The person would be primarily responsible for developing solutions using AWS services, e.g. Fargate, Lambda, ECS, ALB, NLB, S3, etc.
- Apply advanced troubleshooting techniques to resolve issues affecting service availability, performance, and resiliency
- Monitor and optimize performance using AWS dashboards and logs
- Partner with engineering leaders and peers to deliver technology solutions that meet the business requirements
- Work with the cloud team in an agile approach and develop cost-optimized solutions
Primary Responsibilities:
- Develop solutions using AWS services including Fargate, Lambda, ECS, ALB, NLB, S3, etc.
Reporting Team
- Reporting Designation: Head - Big Data Engineering and Cloud Development (ABL_SS_414)
- Reporting Department: Application Development (2487)
Required Skills:
- AWS certification would be preferred
- Good understanding of monitoring (CloudWatch alarms, logs, custom metrics, SNS configuration)
- Good experience with Fargate, Lambda, ECS, ALB, NLB, S3, Glue, Aurora, and other AWS services
- Knowledge of storage (S3, lifecycle management, event configuration) preferred
- Strong grasp of data structures and programming in PySpark, Python, Golang, or Scala
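The monitoring skills listed above (CloudWatch custom metrics, alarms, SNS) can be illustrated with a minimal sketch. The metric name, namespace, and dimensions below are hypothetical, and actually publishing requires a boto3 CloudWatch client with valid AWS credentials:

```python
# Hedged sketch (not from the posting): shaping and publishing a custom
# CloudWatch metric with boto3. All names here are illustrative.
from datetime import datetime, timezone

def build_latency_metric(service: str, latency_ms: float) -> dict:
    """Shape one MetricData entry for CloudWatch put_metric_data."""
    return {
        "MetricName": "RequestLatency",  # hypothetical metric name
        "Dimensions": [{"Name": "Service", "Value": service}],
        "Timestamp": datetime.now(timezone.utc),
        "Value": latency_ms,
        "Unit": "Milliseconds",
    }

def publish(cloudwatch, entries: list) -> None:
    """Send entries to CloudWatch.

    `cloudwatch` would be boto3.client("cloudwatch"); the call needs
    AWS credentials, so it is kept separate from the pure payload code.
    """
    cloudwatch.put_metric_data(Namespace="MyApp", MetricData=entries)
```

An alarm on such a metric would then be configured (via console or `put_metric_alarm`) to notify an SNS topic.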
About UpSolve
We build and deliver complex AI solutions that help drive business decisions faster and more accurately. We are an AI company with a range of solutions developed on video, image, and text.
What you will do
- Stay informed on new technologies and implement cautiously
- Maintain necessary documentation for the project
- Fix the issues reported by application users
- Plan, build, and design solutions with a mental note of future requirements
- Coordinate with the development team to manage fixes, code changes, and merging
Location: Mumbai
Working Mode: Remote
What are we looking for
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Minimum 2 years of professional experience in software development, with a focus on machine learning and full stack development.
- Strong proficiency in Python programming language and its machine learning libraries such as TensorFlow, PyTorch, or scikit-learn.
- Experience in developing and deploying machine learning models in production environments.
- Proficiency in web development technologies including HTML, CSS, JavaScript, and front-end frameworks such as React, Angular, or Vue.js.
- Experience in designing and developing RESTful APIs and backend services using frameworks like Flask or Django.
- Knowledge of databases and SQL for data storage and retrieval.
- Familiarity with version control systems such as Git.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to work effectively in a fast-paced and dynamic team environment.
- Cloud exposure is good to have
at TSG Global Services Private Limited
Urgent requirement!
Experience: minimum 10 years
Location: Delhi
Salary: negotiable
Role
AWS Data Migration Consultant
Provide data migration strategy, expert review, and guidance on data migration from on-prem to AWS infrastructure, including AWS Fargate, PostgreSQL, and DynamoDB. This includes review and SME inputs on:
· Data migration plan, architecture, policies, procedures
· Migration testing methodologies
· Data integrity, consistency, resiliency.
· Performance and Scalability
· Capacity planning
· Security, access control, encryption
· DB replication and clustering techniques
· Migration risk mitigation approaches
· Verification and integrity testing, reporting (Record and field level verifications)
· Schema consistency and mapping
· Logging, error recovery
· Dev-test, staging and production artifact promotions and deployment pipelines
· Change management
· Backup, DR approaches and best practices.
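The record- and field-level verification item above can be sketched in plain Python. The key/field names and the fingerprinting scheme are illustrative assumptions, not part of the posting:

```python
# Hedged sketch: post-migration verification by comparing per-row
# fingerprints of selected fields between source and target.
import hashlib

def row_fingerprint(row: dict, fields: list) -> str:
    """Stable hash of the selected fields, used to compare rows."""
    payload = "|".join(str(row.get(f, "")) for f in fields)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def verify(source_rows, target_rows, key, fields):
    """Return keys missing from the target, and keys whose fields differ."""
    src = {r[key]: row_fingerprint(r, fields) for r in source_rows}
    tgt = {r[key]: row_fingerprint(r, fields) for r in target_rows}
    missing = sorted(set(src) - set(tgt))
    mismatched = sorted(k for k in src.keys() & tgt.keys() if src[k] != tgt[k])
    return missing, mismatched
```

In a real migration the same idea would run as aggregate checksums per table or partition rather than row-by-row in memory.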
Qualifications
- Worked on mid to large scale data migration projects, specifically from on-prem to AWS, preferably in BFSI domain
- Deep expertise in AWS Redshift, PostgreSQL, DynamoDB from data management, performance, scalability and consistency standpoint
- Strong knowledge of AWS Cloud architecture and components, solutions, well architected frameworks
- Expertise in SQL and DB performance related aspects
- Solution Architecture work for enterprise grade BFSI applications
- Successful track record of defining and implementing data migration strategies
- Excellent communication and problem-solving skills
- 10+ years' experience in technology, with at least 4 years in AWS and DBA/DB management/migration-related work
- Bachelor's degree or higher in Engineering or a related field
Required Skill Set:
• Project experience in any of the following: data management, database development, data migration, or data warehousing
• Expertise in SQL, PL/SQL
Role and Responsibilities:
• Work on a complex data management program for a multi-billion-dollar customer
• Work on customer projects related to data migration and data integration
• No troubleshooting
• Execute data pipelines, perform QA, and prepare project documentation for deliverables
• Perform data profiling, data cleansing, and data analysis on migration data
• Participate and contribute in project meetings
• Experience in data manipulation using Python preferred
• Proficient in using Excel and PowerPoint
• Perform other tasks as per project requirements
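The data-profiling duty above can be illustrated with a small dependency-free sketch; the shape of the report is an assumption for illustration:

```python
# Hedged sketch: per-column profiling of row-oriented data before
# migration. Counts nulls (missing keys count as null) and distinct values.
def profile(rows: list) -> dict:
    """Summarise null counts and distinct-value counts per column."""
    columns = {c for r in rows for c in r}
    report = {}
    for col in sorted(columns):
        values = [r.get(col) for r in rows]
        report[col] = {
            "nulls": sum(v is None for v in values),
            "distinct": len({v for v in values if v is not None}),
        }
    return report
```

Such a profile makes it easy to spot columns that need cleansing (unexpected nulls, low-cardinality junk values) before the pipeline runs.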
- You're proficient in the latest AI/machine-learning technologies
- You're proficient in GPT-3-based algorithms
- You have a passion for writing code as well as understanding and crafting the ways systems interact
- You believe in the benefits of agile processes and shipping code often
- You are pragmatic and work to coalesce requirements into reasonable solutions that provide value
Responsibilities
- Deploy well-tested, maintainable and scalable software solutions
- Take end-to-end ownership of the technology stack and product
- Collaborate with other engineers to architect scalable technical solutions
- Embrace and improve our standards and processes to reduce friction and unlock efficiency
Current Ecosystem :
ShibaSwap: https://shibaswap.com/#/
Metaverse: https://shib.io/#/
NFTs: https://opensea.io/collection/theshiboshis
Game : Shiba Eternity on iOS and Android
About Kloud9:
Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.
Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. E-commerce in any industry is constrained by, and spends heavily on, physical infrastructure.
At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.
Our sole focus is to provide cloud expertise to retail industry giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers have been designing, building and implementing solutions for retailers for an average of more than 20 years.
We are a cloud vendor that is both platform and technology independent. Our vendor independence not only gives us a unique perspective on the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.
What we are looking for:
● 3+ years’ experience developing Data & Analytic solutions
● Experience building data lake solutions leveraging one or more of the following: AWS EMR, S3, Hive & Spark
● Experience with relational SQL
● Experience with scripting languages such as Shell, Python
● Experience with source control tools such as GitHub and related dev process
● Experience with workflow scheduling tools such as Airflow
● In-depth knowledge of scalable cloud architectures
● Has a passion for data solutions
● Strong understanding of data structures and algorithms
● Strong understanding of solution and technical design
● Has a strong problem-solving and analytical mindset
● Experience working with Agile Teams.
● Able to influence and communicate effectively, both verbally and written, with team members and business stakeholders
● Able to quickly pick up new programming languages, technologies, and frameworks
● Bachelor's degree in Computer Science
Why Explore a Career at Kloud9:
With job opportunities in prime locations in the US, London, Poland, and Bengaluru, we help build your career path in the cutting-edge technologies of AI, machine learning, and data science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.
Data Scientist
Requirements
● B.Tech/Master's in Mathematics, Statistics, Computer Science, or another quantitative field
● 2-5 years of work experience in the ML domain
● Hands-on coding experience in Python
● Experience in machine learning techniques such as regression, classification, predictive modeling, clustering, the deep learning stack, and NLP
● Working knowledge of TensorFlow/PyTorch
Optional add-ons:
● Experience with distributed computing frameworks: MapReduce, Hadoop, Spark, etc.
● Experience with databases: MongoDB
We're hiring a talented Data Engineer and big data enthusiast to work on our platform and help ensure that our data quality is flawless. As a company, we take in millions of new data points every day. You will work with a passionate team of engineers to solve challenging problems and ensure that we can deliver the best data to our customers, on time. You will use the latest cloud data warehouse technology to build robust and reliable data pipelines. Duties/responsibilities include:
Requirements:
Exceptional candidates will have:
This person MUST have:
- B.E. in Computer Science or equivalent
- 5 years' experience with the Django framework
- Experience building APIs (REST or GraphQL)
- Strong troubleshooting and debugging skills
- React.js knowledge would be an added bonus
- Understanding of how to use a database like Postgres (preferred choice), SQLite, MongoDB, or MySQL
- Sound knowledge of object-oriented design and analysis
- A strong passion for writing simple, clean, and efficient code
- Proficient understanding of code versioning tools such as Git
- Strong communication skills
Experience:
- Minimum 5 years' experience
- Startup experience is a must.
Location:
- Remote developer
Timings:
- 40 hours a week, with 4 hours a day overlapping with the client's timezone. Clients are typically in the California (PST) timezone.
Position:
- Full time/Direct
- We offer great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO days per year, annual increments, a Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. Our notice period is only 15 days.
The Data Engineer will be responsible for selecting and integrating the required big data tools and frameworks, and will implement data ingestion and ETL/ELT processes.
Required Experience, Skills and Qualifications:
- Hands-on experience with big data tools/technologies like Spark, Databricks, MapReduce, Hive, and HDFS
- Expertise in and excellent understanding of the big data toolset, such as Sqoop, Spark Streaming, Kafka, and NiFi
- Proficiency in any of the programming languages Python, Scala, or Java, with 4+ years' experience
- Experience with cloud infrastructure such as MS Azure, Data Lake, etc.
- Good working knowledge of NoSQL DBs (Mongo, HBase, Cassandra)
• 5+ years' experience developing and maintaining modern ingestion pipelines using technologies like Spark, Apache NiFi, etc.
• 2+ years' experience with healthcare payors (focusing on Membership, Enrollment, Eligibility, Claims, Clinical)
• Hands-on experience with AWS Cloud and its native components like S3, Athena, Redshift & Jupyter Notebooks
• Strong in Spark Scala & Python pipelines (ETL & streaming)
• Strong experience in metadata management tools like AWS Glue
• Strong experience coding in languages like Java and Python
• Worked on designing ETL & streaming pipelines in Spark Scala/Python
• Good experience in requirements gathering, design & development
• Working with cross-functional teams to meet strategic goals
• Experience in high-volume data environments
• Critical thinking and excellent verbal and written communication skills
• Strong problem-solving and analytical abilities; should be able to work and deliver individually
• Good to have: AWS Developer certification, Scala coding experience, Postman API, and Apache Airflow or similar scheduler experience
• Nice to have: experience in healthcare messaging standards like HL7, CCDA, EDI, 834, 835, 837
• Good communication skills