Greetings!
Urgently looking!
Experience: Minimum 10 years
Location: Delhi
Salary: Negotiable
Role: AWS Data Migration Consultant
Provide data migration strategy, expert review, and guidance on data migration from on-premises infrastructure to AWS, including AWS Fargate, PostgreSQL, and DynamoDB. This includes review and SME inputs on:
· Data migration plan, architecture, policies, procedures
· Migration testing methodologies
· Data integrity, consistency, and resiliency
· Performance and Scalability
· Capacity planning
· Security, access control, encryption
· DB replication and clustering techniques
· Migration risk mitigation approaches
· Verification and integrity testing, and reporting (record- and field-level verification; a minimal sketch follows this list)
· Schema consistency and mapping
· Logging, error recovery
· Dev-test, staging and production artifact promotions and deployment pipelines
· Change management
· Backup and DR approaches and best practices
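To make the record- and field-level verification bullet concrete, here is a minimal sketch comparing row counts and content digests between a source and target PostgreSQL table. It assumes psycopg2; the connection strings, table, and key column are hypothetical, and a production check would batch rows, hash per chunk, and report mismatching keys rather than assert.

```python
# A minimal sketch of record- and field-level verification, assuming
# psycopg2; connection strings, table, and key column are hypothetical.
import hashlib
import psycopg2

def table_digest(dsn: str, table: str, key: str):
    """Return (row_count, digest) over all rows, ordered by key."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            # table/key are trusted identifiers in this sketch, not user input.
            cur.execute(f"SELECT * FROM {table} ORDER BY {key}")
            digest, count = hashlib.sha256(), 0
            for row in cur:
                digest.update(repr(row).encode())  # field-level content
                count += 1                         # record-level count
            return count, digest.hexdigest()

src = table_digest("host=onprem-db dbname=bank", "accounts", "account_id")
dst = table_digest("host=target-rds dbname=bank", "accounts", "account_id")
assert src == dst, f"verification failed: source {src} vs target {dst}"
```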
Qualifications
- Worked on mid- to large-scale data migration projects, specifically from on-premises to AWS, preferably in the BFSI domain
- Deep expertise in AWS Redshift, PostgreSQL, and DynamoDB from a data management, performance, scalability, and consistency standpoint
- Strong knowledge of AWS Cloud architecture, components, and solutions, including the Well-Architected Framework
- Expertise in SQL and database performance
- Solution architecture work for enterprise-grade BFSI applications
- Successful track record of defining and implementing data migration strategies
- Excellent communication and problem-solving skills
- 10+ years of experience in technology, with at least 4 years in AWS and DBA/database management/migration work
- Bachelor's degree or higher in Engineering or a related field
About TensorIoT
TensorIoT is an AWS Advanced Consulting Partner. We help companies realize the value and efficiency of the AWS ecosystem. From building PoCs and MVPs to production-ready applications, we are tackling complex business problems every day and developing solutions to drive customer success.
TensorIoT's founders helped build world-class IoT and AI platforms at AWS and Google and are now creating solutions to simplify the way enterprises incorporate edge devices and their data into their day-to-day operations. Our mission is to help connect devices and make them intelligent. Our founders firmly believe in the transformative potential of smarter devices to enhance our quality of life, and we're just getting started!
TensorIoT is proud to be an equal-opportunity employer. This means that we are committed to diversity and inclusion and encourage people from all backgrounds to apply. We do not tolerate discrimination or harassment of any kind and make our hiring decisions based solely on qualifications, merit, and business needs at the time.
You will:
- Study and transform data science prototypes.
- Design machine-learning systems.
- Research and implement appropriate ML algorithms and tools.
- Develop machine-learning applications according to requirements.
- Select appropriate datasets and data representation methods.
- Run machine-learning tests and experiments.
- Perform statistical analysis and fine-tuning using test results.
- Train and retrain systems when necessary.
- Extend existing ML libraries and frameworks.
- Keep abreast of developments in the field.
Machine Learning Engineer responsibilities include:
- Designing and developing machine learning and deep learning systems
- Running machine learning tests and experiments
- Implementing appropriate ML algorithms
Must have / Requirements:
- Proven experience as a Machine Learning Engineer or similar role
- Must have experience integrating applications and platforms with cloud technologies (e.g., AWS)
- Must have experience with Docker containers.
- Experience with GPU acceleration (e.g., CUDA and cuDNN)
- Create feature engineering pipelines to process high-volume, multi-dimensional, unstructured (audio, video, NLP) data at scale.
- Knowledge of serverless architectures (e.g., Lambda, Kinesis, Glue); a minimal Lambda sketch follows this list.
- Understanding of end-to-end ML project lifecycle.
- Must have experience with data science tools and frameworks (e.g., Python, scikit-learn, NLTK, NumPy, Pandas, TensorFlow, Keras, R, Spark, PyTorch).
- Experience with cloud-native technologies, microservices design, and REST APIs.
- Knowledge of data query and data processing tools (e.g., SQL)
- Deep knowledge of Math, Probability, Statistics, and Algorithms
- Strong understanding of image recognition & computer vision.
- Must have 4-8 years of experience.
- Excellent communication skills.
- Ability to work in a team.
- BSc in Computer Science, Mathematics, or a similar field; a Master’s degree is a plus.
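As an illustration of the serverless and feature-pipeline items above, here is a minimal sketch of an AWS Lambda handler consuming a Kinesis event; the feature-extraction logic is a hypothetical placeholder, and a real pipeline would persist the output to S3 or a feature store.

```python
# A minimal sketch of a serverless feature-extraction step: an AWS Lambda
# handler over a Kinesis event. extract_features is a hypothetical placeholder.
import base64
import json

def extract_features(record: dict) -> dict:
    # Placeholder: real pipelines would decode audio/video or run NLP here.
    return {"id": record.get("id"), "text_len": len(record.get("text", ""))}

def lambda_handler(event, context):
    features = []
    for rec in event["Records"]:
        # Kinesis delivers each payload base64-encoded.
        payload = base64.b64decode(rec["kinesis"]["data"])
        features.append(extract_features(json.loads(payload)))
    # A real pipeline would write these to S3 or a feature store.
    return {"processed": len(features)}
```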
Job Location: Chennai
Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a Data Architecture strategy across various Data Lake platforms. You will help develop reference architectures and roadmaps to build highly available, scalable, and distributed data platforms using cloud-based solutions to process a high volume, high velocity, and wide variety of structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities
The Data Architect is expected to bring:
• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management, and data quality.
• Extensive knowledge of and experience in architecting modern data integration frameworks and highly scalable distributed systems using open-source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g., GCP, AWS) data applications and capabilities is highly desirable.
• Expert ability to evaluate, prototype, and recommend data solutions and vendor technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory databases.
• Experience with DevOps and Continuous Integration/Continuous Delivery technologies is desirable.
• This role requires 15+ years of data solution architecture, design, and development delivery experience.
• Solid experience in Agile methodologies (Kanban and Scrum)
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases in the cloud.
• Proven leadership skills, with a demonstrated ability to mentor, influence, and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL/ELT (such as Informatica, DataStage), and data integration technologies.
• Experience in at least one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data warehousing solutions and data lakes is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as Git, SVN, CVS, etc.
• Understanding of Hadoop architecture and Hive SQL.
• Knowledge of at least one workflow orchestration tool.
• Understanding of the Agile framework and delivery.
Preferred Skills:
● Experience with AWS and EMR would be a plus
● Exposure to workflow orchestration tools like Airflow is a plus
● Exposure to at least one NoSQL database would be a plus
● Experience with Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of digital web events, ad streams, and context models
About Condé Nast
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms - in other words, a staggering amount of user data. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team entirely dedicated to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation, further content enrichment, and increase market revenue. The Data team aimed to create a company culture where data was the common language and facilitate an environment where insights shared in real-time could improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online revenue. We are broadly divided into four groups: Data Intelligence, Data Engineering, Data Science, and Operations (including Product and Marketing Ops, Client Services), along with Data Strategy and monetization. The teams built capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
2. Responsible for gathering system requirements, working together with application architects and owners.
3. Responsible for generating scripts and templates required for the automatic provisioning of resources (see the sketch after this list).
4. Evaluate standard cloud service offerings, and install and execute processes and standards for optimal use of cloud service provider offerings.
5. Incident management on IaaS, PaaS, and SaaS.
6. Responsible for debugging technical issues inside a complex stack involving virtualization, containers, microservices, etc.
7. Collaborate with the engineering teams to enable their applications to run on cloud infrastructure.
8. Experience with OpenStack, Linux, Amazon Web Services, Microsoft Azure, DevOps, NoSQL, etc. will be a plus.
9. Design, implement, configure, and maintain various Azure IaaS, PaaS, and SaaS services.
10. Deploy and maintain Azure IaaS virtual machines and Azure application and networking services.
11. Optimize Azure billing for cost/performance (VM optimization, reserved instances, etc.).
12. Implement and fully document IT projects.
13. Identify improvements to IT documentation, network architecture, processes/procedures, and tickets.
14. Research products and new technologies to increase the efficiency of business and operations.
15. Keep all tickets and projects updated and track time in a detailed format.
16. Should be able to multi-task and work across a range of projects and issues with various timelines and priorities.
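As one illustration of item 3, here is a minimal sketch of scripted provisioning with the Azure SDK for Python; the subscription ID, resource-group name, and region are placeholders.

```python
# A minimal sketch of scripted Azure provisioning, assuming the
# azure-identity and azure-mgmt-resource packages; the subscription ID,
# resource-group name, and region are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()  # env vars, managed identity, or CLI login
client = ResourceManagementClient(credential, "<subscription-id>")

# create_or_update is idempotent: it creates the group or updates it in place.
group = client.resource_groups.create_or_update(
    "demo-rg", {"location": "eastus"}
)
print(group.name, group.location)
```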
Technical:
• Minimum 1 year of experience with Azure; knowledge of Office 365 services preferred.
• Formal education in IT preferred
• Experience with the Managed Services business model is a major plus
• Bachelor’s degree preferred
Job Title – Data Scientist (Forecasting)
Anicca Data is seeking a Data Scientist (Forecasting) who is motivated to apply their skill set to solve complex and challenging problems. The focus of the role will center on applying deep learning models to real-world applications. The candidate should have experience in training and testing deep learning architectures, and is expected to work on existing codebases or write optimized code at Anicca Data. The ideal addition to our team is self-motivated, highly organized, and a team player who thrives in a fast-paced environment, with the ability to learn quickly and work independently.
Job Location: Remote (for the time being) and Bangalore, India (post-COVID crisis)
Required Skills:
- 3+ years of experience in a Data Scientist role
- Bachelor's/Master's degree in Computer Science, Engineering, Statistics, Mathematics, or a similar quantitative discipline; a Ph.D. will add merit to the application process
- Experience with large data sets, big data, and analytics
- Exposure to statistical modeling, forecasting, and machine learning; deep theoretical and practical knowledge of deep learning, machine learning, statistics, probability, and time series forecasting
- Training Machine Learning (ML) algorithms in areas of forecasting and prediction
- Experience in developing and deploying machine learning solutions in a cloud environment (AWS, Azure, Google Cloud) for production systems
- Research and enhance existing in-house, open-source models, integrate innovative techniques, or create new algorithms to solve complex business problems
- Experience in translating business needs into problem statements, prototypes, and minimum viable products
- Experience managing complex projects including scoping, requirements gathering, resource estimations, sprint planning, and management of internal and external communication and resources
- Write C++ and Python code, along with TensorFlow and PyTorch, to build and enhance the platform used for training ML models
Preferred Experience
- Worked on forecasting projects – both classical and ML models
- Experience with training time series forecasting methods like Moving Average (MA) and Autoregressive Integrated Moving Average (ARIMA), as well as neural network (NN) models such as feed-forward NNs and nonlinear autoregressive models (a minimal ARIMA sketch follows this list)
- Strong background in forecasting accuracy drivers
- Experience in Advanced Analytics techniques such as regression, classification, and clustering
- Ability to explain complex topics in simple terms, ability to explain use cases and tell stories
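For reference on the classical methods named above, here is a minimal ARIMA sketch using statsmodels on synthetic data; the series and the (1, 1, 1) order are illustrative placeholders, not tuned choices.

```python
# A minimal ARIMA forecasting sketch on synthetic data, assuming
# statsmodels; the (1, 1, 1) order is an illustrative placeholder.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic daily series: linear trend plus Gaussian noise.
y = pd.Series(
    0.5 * np.arange(200) + rng.normal(scale=2.0, size=200),
    index=pd.date_range("2021-01-01", periods=200, freq="D"),
)

fitted = ARIMA(y, order=(1, 1, 1)).fit()
print(fitted.forecast(steps=14))  # two weeks ahead
```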
Big Data with Cloud:
Experience: 5-10 years
Location: Hyderabad/Chennai
Notice period: 15-20 days max
1. Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight
2. Experience in developing Lambda functions with AWS Lambda
3. Expertise with Spark/PySpark – the candidate should be hands-on with PySpark code and able to do transformations with Spark (see the sketch after this list)
4. Should be able to code in Python and Scala.
5. Snowflake experience will be a plus
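To illustrate item 3, here is a minimal PySpark transformation sketch in the shape of a pipeline stage feeding Athena/QuickSight; the S3 paths, column names, and status value are hypothetical.

```python
# A minimal PySpark transformation sketch; S3 paths, column names, and
# the status value are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

orders = spark.read.json("s3://bucket/raw/orders/")

daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))  # timestamp -> date
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Partitioned Parquet output that Athena/QuickSight can query downstream.
daily_revenue.write.mode("overwrite").parquet("s3://bucket/curated/daily_revenue/")
```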
Role: Teradata Lead
Band: C2
Experience level: Minimum 10 years
Job Description:
This role leads DBA teams comprising DBAs at multiple experience levels across a mix of Teradata, Oracle, and SQL.
Skill Set:
Minimum 10 years of relevant database and data warehouse experience.
Hands-on experience administering Teradata.
Leading performance analysis and capacity planning, and supporting batch operations and users with their jobs.
Drive implementation of standards and best practices to optimize database utilization and availability.
Hands-on with AWS Cloud infrastructure services such as EC2, S3, and networking services.
Proficient in Linux system administration relevant to Teradata management.
Teradata Specific (Mandatory)
Manage and operate 24x7 production as well as development databases to ensure maximum availability of system resources.
Responsible for the operational activities of a Database Administrator, such as system monitoring, user management, space management, troubleshooting, and batch/user support.
Perform DBA tasks in the key areas of performance management and reporting, including workload management using TASM.
Manage production/development databases in areas like capacity planning, performance monitoring and tuning, defined backup/recovery strategies, and space/user/security management, along with problem determination and resolution.
Experience with Teradata workload management, monitoring, and query optimization.
Expertise with system monitoring using Viewpoint and logs.
Proficient in analyzing performance and optimizing at different levels.
Ability to create advanced system-level capacity reports as well as root cause analyses.
Oracle Specific (Optional)
Database administration: installation of Oracle software on Unix/Linux platforms.
Database lifecycle management: database creation, setup, and decommissioning.
Database event alert monitoring, space management, and user management.
Database upgrades, migrations, and cloning.
Database backup, restore, and recovery using RMAN.
Set up and maintain high availability and disaster recovery solutions.
Proficient in Standby and Data Guard technology.
Hands-on with Oracle Enterprise Manager Cloud Control (OEM CC).
Mandatory Certifications:
- Teradata Vantage Certified Administrator
- ITIL Foundation
Job Description
Problem Statement & Solution
Only 10% of India speaks English; the other 90% speak over 25 languages and thousands of dialects. The internet has largely been in English. A good part of India is now getting internet connectivity thanks to cheap smartphones and Jio. Non-English-speaking internet users will balloon to about 600 million of the total 750 million internet users in India by 2020. This will make the vernacular segment one of the largest in the world - almost 2x the size of the US population. The vernacular segment has very few products that it can use on the internet.
One large human need is that of sharing thoughts and connecting with people of the same community on the basis of language and common interests. Twitter serves this need globally but the experience is mostly in English. There’s a large unaddressed need for these vernacular users to express themselves in their mother tongue and connect with others from their community. Koo is a solution to this problem.
About Koo
Koo was founded in March 2020, as a micro-blogging platform in both Indian languages and English, which gives a voice to the millions of Indians who communicate in Indian languages.
Currently available in Assamese, Bengali, English, Hindi, Kannada, Marathi, Tamil, and Telugu, Koo enables people from across India to express themselves online in their mother tongues. In a country where under 10% of the population speaks English as a native language, Koo meets the need for a social media platform that can deliver an immersive language experience to Indian users, thereby enabling them to connect and interact with each other. The recently introduced ‘Talk to Type’ feature enables users to leverage the voice assistant to share their thoughts without having to type. In August 2021, Koo crossed 10 million downloads, just 16 months after launch.
Koo has been available in Nigeria since June 2021.
Founding Team
Koo was founded by veteran internet entrepreneurs - Aprameya Radhakrishna (CEO, TaxiForSure) and Mayank Bidawatka (co-founder, Goodbox; core team, redBus).
Technology Team & Culture
The technology team comprises sharp coders, technology geeks, and people who have been entrepreneurs or are entrepreneurial and extremely passionate about technology. Talent comes from the likes of Google, Walmart, redBus, and Dailyhunt. Anyone who is part of the technology team will have a lot to learn from their peers and mentors. Download our Android app and take a look at what we've built. The technology stack comprises a wide variety of cutting-edge technologies: Kotlin, Java 15, reactive programming, MongoDB, Cassandra, Kubernetes, AWS, NodeJS, Python, ReactJS, Redis, Aerospike, ML, deep learning, etc. We believe in giving a lot of independence and autonomy to ownership-driven individuals.
Technology skill sets required for a matching profile
- Work experience of 4 to 8 years in building large-scale, high-user-traffic, consumer-facing applications, with a desire to work in a fast-paced startup.
- Development experience with real-time data analytics backend infrastructure on AWS.
- Responsible for building data and analytical engineering solutions with standard e2e design and ELT patterns, implementing data compaction pipelines, data modelling, and overseeing overall data quality.
- Responsible for enabling access to data in the AWS S3 storage layer and transformations in the data warehouse.
- Implement data warehouse entities with common reusable data model designs, with automation and data quality capabilities.
- Integrate domain data knowledge into the development of data requirements.
- Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
We are looking for an ETL Developer for a reputed client in Coimbatore (permanent role).
Work Location: Coimbatore
Experience: 4+ years
Skills:
- Talend, or strong experience in any ETL tool (Informatica/DataStage/Talend)
- DB preference: Teradata/Oracle/SQL Server
- Supporting tools: JIRA/SVN