What roles and responsibilities will be performed by the selected candidate?
§ Should be able to handle all ETL/data warehouse testing phases (Azure environment).
§ Should be able to test complex SQL scripts including Spark SQL
§ Should be able to apply business and functional knowledge, including testing standards, guidelines, and testing methodology, to meet the team's overall test objectives.
§ Should be able to identify business requirements from the mapping document, including data sources, target systems, required transformations, business rules to be applied, and the existing data model.
§ Define Testing strategy, Test approach, Test suites and Test cases.
§ Run, test and debug ETL jobs (Azure environment).
§ Identify and track defects to closure and keep defect log.
§ Should be able to produce test result documentation that is clear and concise, along with documentation of tests performed, test coverage, test risks, assumptions, issues, and dependencies.
§ Bring in industry best practices in ETL testing.
§ Should be able to test pandas data frame ETL transformations.
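To illustrate the kind of check this testing involves, here is a minimal sketch of an ETL reconciliation test. SQLite stands in for the warehouse, and the tables, columns, and business rule are hypothetical, not taken from any actual project.

```python
import sqlite3

# Hypothetical source and target tables standing in for warehouse layers.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (order_id INTEGER, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 20.5), (3, -5.0);
    -- Target is expected to drop negative amounts (a made-up business rule).
    CREATE TABLE tgt_orders (order_id INTEGER, amount REAL);
    INSERT INTO tgt_orders VALUES (1, 10.0), (2, 20.5);
""")

def check_row_counts(conn):
    """Target row count must equal the source rows that pass the rule."""
    expected = conn.execute(
        "SELECT COUNT(*) FROM src_orders WHERE amount >= 0").fetchone()[0]
    actual = conn.execute("SELECT COUNT(*) FROM tgt_orders").fetchone()[0]
    return expected == actual

def check_totals(conn):
    """Summed amounts must reconcile between filtered source and target."""
    expected = conn.execute(
        "SELECT SUM(amount) FROM src_orders WHERE amount >= 0").fetchone()[0]
    actual = conn.execute("SELECT SUM(amount) FROM tgt_orders").fetchone()[0]
    return abs(expected - actual) < 1e-9

print(check_row_counts(conn), check_totals(conn))
```

Real suites would add duplicate checks, null checks, and per-rule transformation assertions, but the reconcile-source-to-target shape is the same.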
What is the expectation from the candidate’s current role/profile?
§ 2-5 years of experience in core ETL testing expertise with strong exposure to agile methodology.
§ Ability to apply business and functional knowledge, to define testing strategy, test approach and test case
design.
§ Should have excellent SQL skills
§ Should have worked on ETL projects involving multiple layers/stages of database processing with various kinds of sources/targets, such as files, Oracle, SQL Server, web services, Delta Lake, etc.
§ Should have at least intermediate level knowledge of python scripting and pandas (Python data analysis
library).
§ Knowledge of Software Development Lifecycle including the functional and non-functional test phases
§ Good interpersonal, communication and organizational skills
§ Ability to work effectively with team members and management personnel across the organization
About Nuvento Systems
heads to solve complex business problems
- Develop statistical, and machine learning-based models/pipelines/methods to improve business
processes and engagements
- Conduct sophisticated data mining analyses of large volumes of data and build data science
models, as required, as part of the credit and risk underwriting solutions; customer engagement and
retention; new business initiatives; business process improvements
- Translate data mining results into clear, business-focused deliverables for decision-makers
- Work with application developers on integrating machine learning algorithms and data mining models into operational systems, leading to automation, productivity gains, and time savings
- Provide the technical direction required to resolve complex issues and ensure the on-time delivery of solutions that meet the business team's expectations; develop new methods where existing ones do not apply
- Knowledge of how to leverage statistical models in algorithms is a must
- Experience in multivariate analysis; identifying how several parameters can affect
retention/behaviour of the customer and identifying actions at different points of the customer lifecycle
- Extensive experience coding in Python, and experience mentoring teams to learn the same
- Great understanding of the data science landscape and what tools to leverage for different
problems
- A structured thinker who can quickly bring structure to any data science problem
- Ability to visualize data stories and adept in data visualization tools and present insights as cohesive
stories to senior leadership
- Excellent capability to organize large data sets collected from many sources (web APIs and internal
databases) to get actionable insights
- Initiate data science programs in the team and collaborate across other data science teams to build
a knowledge database
Power BI Engineer
Company Overview:
At Codvo, software and people transformations go hand-in-hand. We are a global empathy-led technology services company. Product innovation and mature software engineering are part of our core DNA. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day.
We continue to expand our digital strategy, design, architecture, and product management capabilities to offer expertise, outside-the-box thinking, and measurable results.
What we're looking for :
- You should have strong technical and analytical skills, with particular depth in SQL Server and reporting tools such as Tableau, Power BI, SSRS, and .NET.
- You should have experience in OLAP, UI/UX, and dashboard building.
- You should have the experience needed to properly understand the project deliverables.
- You should possess excellent communication skills.
Responsibilities:
- You will be responsible for the respective tasks assigned in the project.
- You will be responsible for delivering with proper quality, within the planned time and cost, adhering to the industry standards defined for the project.
- You will be involved in client interaction.
- 5+ years of experience building real-time and distributed system architecture, from whiteboard to production
- Strong programming skills in Python, Scala and SQL.
- Versatility. Experience across the entire spectrum of data engineering, including:
- Data stores (e.g., AWS RDS, AWS Athena, AWS Aurora, AWS Redshift)
- Data pipeline and workflow orchestration tools (e.g., Azkaban, Airflow)
- Data processing technologies (e.g., Spark, Pentaho)
- Deploying and monitoring large database clusters in public cloud platforms (e.g., Docker, Terraform, Datadog)
- Creating ETL or ELT pipelines that transform and process petabytes of structured and unstructured data in real-time
- Industry experience building and productionizing innovative end-to-end Machine Learning systems is a plus.
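The extract-transform-load shape mentioned above can be sketched briefly; this is a hypothetical stdlib-only illustration with made-up data, streaming row by row through generators rather than materializing everything in memory.

```python
import csv
import io
import json

# Toy raw input; a real pipeline would read from a file, queue, or API.
RAW = "user_id,amount\n1,10.0\n2,not_a_number\n3,7.5\n"

def extract(text):
    """Yield records from CSV text, one dict per row."""
    yield from csv.DictReader(io.StringIO(text))

def transform(rows):
    """Cast amounts to float, routing unparseable records out of the flow."""
    for row in rows:
        try:
            row["amount"] = float(row["amount"])
        except ValueError:
            continue  # a real pipeline would send these to a dead-letter sink
        yield row

def load(rows):
    """Serialize records; a stand-in for writing to a real target store."""
    return [json.dumps(r) for r in rows]

out = load(transform(extract(RAW)))
print(out)
```

The same three-stage shape scales up when the generators are replaced by distributed operators in Spark or a workflow engine.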
Position description:
- Architect and own the report automation framework using GCP's BigQuery and a scripting language (R/Python)
- Work on the enhancement of existing and new analysis on Tableau
- Work closely with the existing team and mentor them
- Work on Ad-hoc analysis
Primary Responsibilities:
- Architect and own the report automation framework using GCP's BigQuery and a scripting language (R/Python)
Reporting Team
- Reporting Designation: Data Science Analyst
- Reporting Department: Digital Analytics BI (2511)
Required Skills:
- Hands-on experience in relevant tools like SQL(expert), Excel, R/Python, Tableau/PowerBI
- Advanced ability to draw insights from data and clearly communicate them to the stakeholders and senior management as required
Experience : 3 to 7 Years
Number of Positions : 20
Job Location : Hyderabad
Notice : 30 Days
1. Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight
2. Experience in developing lambda functions with AWS Lambda
3. Expertise with Spark/PySpark – candidate should be hands-on with PySpark code and able to do transformations with Spark
4. Should be able to code in Python and Scala.
5. Snowflake experience will be a plus
Hadoop and Hive are good to have; a working understanding is enough.
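For simple aggregations, Spark SQL accepts largely standard ANSI SQL, so the transformation shape a candidate would write can be sketched without a cluster. The query below is the kind of text you would pass to `spark.sql(...)`; here SQLite stands in so the sketch is self-contained, and the table and data are hypothetical.

```python
import sqlite3

# A transformation to validate: positive order amounts aggregated per user.
# This query shape is plain ANSI SQL, so Spark SQL would accept the same
# text via spark.sql(QUERY); SQLite runs it here for illustration.
QUERY = """
    SELECT user_id, SUM(amount) AS total
    FROM orders
    WHERE amount > 0
    GROUP BY user_id
    ORDER BY user_id
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 5.0), (1, 7.5), (2, 3.0), (2, -1.0)])
result = conn.execute(QUERY).fetchall()
print(result)  # [(1, 12.5), (2, 3.0)]
```

Spark-specific features (window hints, Delta syntax, DataFrame chaining) go beyond this, but the declarative filter/group/aggregate core is the same.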
- Editorialist YX is looking for a Technical Architect - Search. As part of this role, you will work with a team that builds a unified search platform to power various searches for our .com website, iOS app, and internal support tools. This search impacts thousands of customers a day and will become pivotal to our tech efforts as we continue to grow 30x YoY.
- You will own the technical direction for the team, and you will be leading key search projects from ideation all the way to deployment. You will be working closely with both technical and business leaders to fulfill your mission.
- Salary is no bar for the relevant candidate.
QUALIFICATION
- 6+ years of experience working in java and web services.
- 6+ years of experience working in the Search domain.
- Proven skills in designing scalable, highly available distributed systems which can handle high data volumes.
- Strong understanding of software engineering principles and fundamentals including data structures and algorithms.
- Solid understanding of concurrency and multi-threading, multiple design patterns, and debugging and analytical methodologies.
- Hands-on experience with SolrCloud or Elasticsearch.
- Deep understanding of information retrieval concepts.
- Deep understanding of linguistic processing components such as tokenizers, spell checkers, and stemmers.
- Hands-on experience with big data tech stacks such as Hadoop, Hive, Cassandra, and Spark is a plus.
- Self-directed, self-motivated, and detail-oriented with the ability to come up with good design proposals and thorough analysis of production issues.
- Excellent written and oral communication skills on both technical and non-technical topics.
RESPONSIBILITIES
- Design and build a search engine using Elasticsearch, together with engineers on the team, for the overall success of Search and other ML-based systems.
- Collaborate with peers from other Engineering groups to tackle complex and meaningful problems with efficient and scalable delivery of Search solutions.
- You are expected to be self-motivated, dedicated, and solution-oriented. The main responsibilities for this position include: leading the effort to build large-scale, distributed, and highly available systems and pipelines.
- Leading the effort to build large-scale and highly available information retrieval systems
- Design and develop solutions using Java tech stack.
- Design and implement per secure coding guidelines
- Work with QA to identify issues and fix them.
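As a small illustration of the Elasticsearch work described above, the sketch below builds a request body in the Query DSL: a bool query combining a full-text match with a keyword filter. The index fields (`title`, `brand.keyword`) and values are hypothetical, and no cluster is contacted; the point is the shape of the query.

```python
import json

def build_search_body(text, brand=None, size=10):
    """Assemble a hypothetical product-search body in ES Query DSL."""
    # Full-text relevance clause: all query terms must match the title.
    must = [{"match": {"title": {"query": text, "operator": "and"}}}]
    # Exact-match filters score nothing and cache well, so brand goes here.
    filters = []
    if brand:
        filters.append({"term": {"brand.keyword": brand}})
    return {
        "size": size,
        "query": {"bool": {"must": must, "filter": filters}},
    }

body = build_search_body("red leather boots", brand="acme")
print(json.dumps(body, indent=2))
```

In a live system this dict would be sent to the `_search` endpoint of the relevant index; keeping filters out of the scoring clauses is the standard relevance/performance split.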
EDUCATION & EXPERIENCE
- B.Tech. in Computer Science or equivalent experience
- 6+ Yrs of experience in Java, Web services & Search Domain
- Experience working in Product based company
BENEFITS
- Retiral Benefits
- Medical Insurance
- Remote Working Opportunity for the time being
- MacBook
- Stock Options
- Gym Membership
Responsibilities:
- Improve the robustness of Leena AI's current NLP stack
- Increase the zero-shot learning capability of Leena AI's current NLP stack
- Opportunity to add/build new NLP architectures based on requirements
- Manage the end-to-end lifecycle of data in the system until it achieves more than 90% accuracy
- Manage an NLP team
Requirements:
- Strong understanding of linear algebra, optimisation, probability, statistics
- Experience in the data science methodology from exploratory data analysis, feature engineering, model selection, deployment of the model at scale and model evaluation
- Experience in deploying NLP architectures in production
- An understanding of the latest NLP architectures, such as transformers, is good to have
- Experience with adversarial attacks/robustness of DNNs is good to have
- Experience with Python Web Framework (Django), Analytics and Machine Learning frameworks like Tensorflow/Keras/Pytorch.
ML ARCHITECT
Job Overview
We are looking for an ML Architect to help us discover the information hidden in vast amounts of data and make smarter decisions that deliver even better products. Your primary focus will be applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products. You must have strong experience using a variety of data mining and data analysis methods, building and implementing models, using and creating algorithms, and creating and running simulations. You must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes. The role also involves automating the identification of textual data, along with its properties and structure, from various types of documents.
Responsibilities
- Selecting features, building and optimizing classifiers using machine learning techniques
- Data mining using state-of-the-art methods
- Enhancing data collection procedures to include information that is relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Creating automated anomaly detection systems and constantly tracking their performance
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Secure and manage GPU cluster resources for events when needed
- Write comprehensive internal feedback reports and find opportunities for improvements
- Manage GPU instances/machines to increase the performance and efficiency of the ML/DL model.
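A hypothetical baseline for the anomaly detection bullet above is the classic z-score rule: flag points far from the mean in standard-deviation units. The data and threshold below are illustrative only (a single large outlier inflates the sample stdev, which is why the cutoff here is 2.5 rather than the textbook 3).

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Return values more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Made-up latency readings with one injected spike at 250.
latencies = [101, 99, 98, 102, 100, 97, 103, 100, 99, 250]
print(zscore_anomalies(latencies))
```

Production systems typically swap this for robust statistics (median/MAD) or model-based detectors, but the flag-by-deviation shape is the same.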
Skills and Qualifications
- Strong Hands-on experience in Python Programming
- Working experience with Computer Vision models - Object Detection Model, Image Classification
- Good experience in feature extraction, feature selection techniques and transfer learning
- Working experience building deep learning NLP models for text classification and image analytics (CNN, RNN, LSTM).
- Working experience in any of the AWS/GCP cloud platforms, with exposure to fetching data from various sources.
- Good experience in exploratory data analysis, data visualisation, and other data preprocessing techniques.
- Knowledge in any one of the DL frameworks like Tensorflow, Pytorch, Keras, Caffe
- Good knowledge of statistics, distributions of data, and supervised and unsupervised machine learning algorithms.
- Exposure to OpenCV
- Familiarity with GPUs + CUDA
- Experience with NVIDIA software for cluster management and provisioning, such as NVSM, DCGM, and DeepOps.
- We are looking for a candidate with 14+ years of experience, who has attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with AWS cloud services: EC2, RDS, AWS-Sagemaker(Added advantage)
- Experience with object-oriented/object function scripting languages in any: Python, Java, C++, Scala, etc.
Top MNC looking for candidates in Business Analytics (4-8 years of experience).
Requirement :
- Experience in metric development & Business analytics
- High Data Skill Proficiency/Statistical Skills
- Tools: R, SQL, Python, Advanced Excel
- Good verbal/communication Skills
- Supply Chain domain knowledge
*Job Summary*
Duration: 6-month contract based in Hyderabad
Availability: 1 week/Immediate
Qualification: Graduate/PG from Reputed University
*Key Skills*
R, SQL, Advanced Excel, Python
*Required Experience and Qualifications*
5 to 8 years of Business Analytics experience.