IBM DB2 DBA
- Expert knowledge of IBM DB2 V11.5 installations, configurations & administration in Linux systems.
- Expert level knowledge in Database restores including redirected restore & backup concepts.
- Excellent understanding of database performance monitoring and tuning techniques; able to perform performance checks and query optimization
- Good knowledge of utilities like import, load & export under high volume conditions.
- Ability to tune SQL statements using db2advis & db2expln.
- Ability to troubleshoot database issues using db2diag, db2pd, db2dart, db2top, etc.
- Administration of database objects.
- Capability to review & assess features or upgrades to existing components.
- Experience in validating security aspects on a confidential database.
- Hands-on experience in SSL communication setup, strong access control, and database hardening.
- Experience in performing productive DB recovery and validating crash recovery.
- Experience in handling incidents & opening DB2 support tickets.
- Experience in deploying a special build DB2 version from DB2 support.
- Worked in environments such as 12x5 support of production database services.
- Excellent problem-solving and analytical skills.
Good to have:
- Experience in handling application servers (WebSphere, WebLogic, Jboss, etc.) in highly available Production environments.
- Experience in maintenance, patching, and installing updates on WebSphere Servers in the Production environment.
- Able to handle installation/deployment of the product (JAR/EAR/WAR) independently.
- Knowledge of ITIL concepts (Service operation & transition)
Soft skills:
- Ability to work with the global team (co-located staffing).
- A learning attitude, the ability to work as an individual contributor, and excellent communication skills.
- Support: 12/5 support is required (on a rotational basis).
About the company: an IT solutions firm specialized in Apps Lifecycle management. (MG1)
1+ years of proven experience in ML/AI with Python
Work with the manager through the entire analytical and machine learning model life cycle:
⮚ Define the problem statement
⮚ Build and clean datasets
⮚ Perform exploratory data analysis
⮚ Engineer features
⮚ Apply ML algorithms and assess the performance
⮚ Codify for deployment
⮚ Test and troubleshoot the code
⮚ Communicate analysis to stakeholders
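The lifecycle steps above can be sketched end-to-end. A minimal illustration in pure Python, where synthetic data and a closed-form least-squares fit stand in for a real dataset and model (all names and numbers below are invented for the example):

```python
import random
import statistics

# 1. Define the problem: predict y from x (here y ~ 3x + 2 plus noise).
random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [3 * x + 2 + random.gauss(0, 0.1) for x in xs]

# 2. Build and clean the dataset: drop any malformed pairs.
data = [(x, y) for x, y in zip(xs, ys) if y is not None]

# 3. Exploratory data analysis: basic summary statistics.
print("mean y:", round(statistics.mean(y for _, y in data), 2))

# 4. Feature engineering: trivial here -- use x as-is; hold out a test split.
train, test = data[:80], data[80:]

# 5. Apply an ML algorithm: ordinary least squares, closed form.
n = len(train)
sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# 6. Assess performance on held-out data (mean absolute error).
mae = statistics.mean(abs(y - (slope * x + intercept)) for x, y in test)
print(f"slope={slope:.2f} intercept={intercept:.2f} mae={mae:.3f}")
```

Steps 7–9 (codify for deployment, test, and communicate) would wrap this logic in packaged, tested code and a stakeholder-facing summary.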
Technical Skills
⮚ Proven experience in usage of Python and SQL
⮚ Strong programming and statistics skills
⮚ Working knowledge of tools and utilities - AWS, DevOps with Git, Selenium, Postman, Airflow, PySpark
Company Overview:
At Codvo, software and people transformations go hand-in-hand. We are a global empathy-led technology services company. Product innovation and mature software engineering are part of our core DNA. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day. We continue to expand our digital strategy, design, architecture, and product management capabilities to offer expertise, outside-the-box thinking, and measurable results.
Required Skills (Technical):
- Advanced knowledge of statistical techniques, NLP, machine learning algorithms, and deep learning frameworks such as TensorFlow, Theano, Keras, and PyTorch.
- Proficiency with modern statistical modelling (regression, boosting trees, random forests, etc.), machine learning (text mining, neural network, NLP, etc.), optimization (linear optimization, nonlinear optimization, stochastic optimization, etc.) methodologies.
- Building complex predictive models using ML and DL techniques with production-quality code, and jointly owning complex data science workflows with the Data Engineering team.
- Familiarity with modern data analytics architecture and data engineering technologies (SQL and No-SQL databases).
- Knowledge of REST APIs and Web Services
- Experience with Python, R, sh/bash
Required Skills (Non-Technical):
- Fluent English communication (spoken and written)
- Should be a team player
- Should have a learning aptitude
- Detail-oriented and analytical.
- Extremely organized with strong time-management skills
- Problem Solving & Critical Thinking
Role & responsibilities:
- Developing ETL pipelines for data replication
- Analyze, query and manipulate data according to defined business rules and procedures
- Manage very large-scale data from a multitude of sources into appropriate sets for research and development for data science and analysts across the company
- Convert prototypes into production data engineering solutions through rigorous software engineering practices and modern deployment pipelines
- Resolve internal and external data exceptions in a timely and accurate manner
- Improve multi-environment data flow quality, security, and performance
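As a toy illustration of the ETL responsibilities above, a minimal extract-transform-load pass in pure Python (the CSV payload, the business rule, the exchange rate, and the table name are all invented for the example):

```python
import csv
import io
import sqlite3

# Extract: parse a raw CSV feed (inlined here; normally a file or API).
raw = "order_id,amount,currency\n1,100,USD\n2,-5,USD\n3,80,EUR\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: apply a business rule -- drop non-positive amounts,
# and normalize EUR to USD at an assumed fixed rate.
RATE = {"USD": 1.0, "EUR": 1.1}
clean = [
    (int(r["order_id"]), float(r["amount"]) * RATE[r["currency"]])
    for r in rows
    if float(r["amount"]) > 0
]

# Load: write the cleaned records into a relational store.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_id INTEGER, amount_usd REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)", clean)
total = round(con.execute("SELECT SUM(amount_usd) FROM orders").fetchone()[0], 2)
print(total)  # 100 + 80*1.1 = 188.0
```

A production pipeline would replace the inlined payload with source connectors, add exception handling for bad records, and run under an orchestrator, but the extract/transform/load shape is the same.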
Skills & qualifications:
- Must have experience with:
- virtualization, containers, and orchestration (Docker, Kubernetes)
- creating log ingestion pipelines (Apache Beam) both batch and streaming processing (Pub/Sub, Kafka)
- workflow orchestration tools (Argo, Airflow)
- supporting machine learning models in production
- Have a desire to continually keep up with advancements in data engineering practices
- Strong Python programming and exploratory data analysis skills
- Ability to work independently and with team members from different backgrounds
- At least a bachelor's degree in an analytical or technical field. This could be applied mathematics, statistics, computer science, operations research, economics, etc. Higher education is welcome and encouraged.
- 3+ years of work in software/data engineering.
- Superior interpersonal, independent judgment, complex problem-solving skills
- Global orientation, experience working across countries, regions and time zones
Candidates should hold a degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
● Experience with big data tools: Hadoop, Hive, Spark, Kafka, etc.
● Experience querying multiple SQL/NoSQL databases, including Oracle, MySQL, MongoDB, etc.
● Experience with Redis, RabbitMQ, and Elasticsearch is desirable.
● Strong experience with object-oriented/functional/scripting languages: Python (preferred), Core Java, JavaScript, Scala, shell scripting, etc.
● Must have strong skills in debugging complex code; experience with ML/AI algorithms is a plus.
● Experience with a version control tool (Git or similar) is mandatory.
● Experience with AWS cloud services: EC2, EMR, RDS, Redshift, S3
● Experience with stream-processing systems: Storm, Spark Streaming, etc.
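The stream-processing systems listed above share a few core ideas, such as windowed aggregation over timestamped events. A minimal tumbling-window count in pure Python (the event payloads and the 10-second window size are invented for illustration):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=10):
    """Group (timestamp, key) events into fixed, non-overlapping windows
    and count occurrences of each key per window -- the batch analogue of
    what Storm/Spark Streaming do continuously over unbounded input."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        windows[ts // window_secs * window_secs][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(1, "click"), (4, "view"), (9, "click"), (12, "click"), (19, "view")]
print(tumbling_window_counts(events))
# {0: {'click': 2, 'view': 1}, 10: {'click': 1, 'view': 1}}
```

Real engines add the hard parts this sketch omits: out-of-order events, watermarks, state checkpointing, and exactly-once delivery.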
Requirements:
- Overall 3 to 5 years of experience in designing and implementing complex, large-scale software.
- Strong Python skills are a must.
- Experience in Apache Spark, Scala, Java and Delta Lake
- Experience in designing and implementing templated ETL/ELT data pipelines
- Expert-level experience in data pipeline orchestration using Apache Airflow for large-scale production deployments
- Experience in visualizing data from various tasks in the data pipeline using Apache Zeppelin/Plotly or any other visualization library.
- Log management and log monitoring using ELK/Grafana
- GitHub integration
Technology Stack: Apache Spark, Apache Airflow, Python, AWS, EC2, S3, Kubernetes, ELK, Grafana, Apache Arrow, Java
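Several of the requirements above center on Airflow-style orchestration, which at its core is dependency-ordered task execution over a DAG. A minimal sketch in pure Python using the standard library's graphlib (the task names and dependency graph are invented for the example):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on, as in an Airflow DAG.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"validate"},
    "report": {"load"},
}

run_log = []

def run(task):
    # A real orchestrator would execute the task's work here,
    # with retries, scheduling, and logging around it.
    run_log.append(task)

for task in TopologicalSorter(dag).static_order():
    run(task)

print(run_log)  # ['extract', 'transform', 'validate', 'load', 'report']
```

Airflow layers scheduling, retries, backfills, and monitoring on top of exactly this topological-execution core.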
- Bachelor's degree in Computer Science, Engineering, Operations Research, Math, or related discipline.
- Minimum 2 years of experience in an Analyst role preferred.
- Highly proficient in Microsoft Office and Windows-based applications.
- SQL and Excel knowledge and hands-on experience are a must.
- Demonstrated analytical ability in a results-oriented environment with external customer interaction.
- Excellent written and verbal communication and presentation skills and the ability to express thoughts logically and succinctly.
- Entrepreneurial drive and demonstrated ability to achieve stretch goals in an innovative and fast-paced environment.
Preferred Qualifications
- Experience with E-Commerce, Retail and Business Analytics would be an advantage.
- Understanding of data warehousing and data modeling concepts and building new DW tables
- Advanced SQL skills, fluent in R and/or Python, advanced Microsoft Office skills, particularly Excel and analytical platforms
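The advanced SQL skills called for above typically include window functions. A small self-contained example using Python's built-in sqlite3 module (the table and figures are invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, revenue REAL);
    INSERT INTO sales VALUES
        ('North', '2024-01', 100), ('North', '2024-02', 120),
        ('South', '2024-01', 80),  ('South', '2024-02', 90);
""")

# Window function: month-over-month revenue change per region.
rows = con.execute("""
    SELECT region, month, revenue,
           revenue - LAG(revenue) OVER (
               PARTITION BY region ORDER BY month
           ) AS change
    FROM sales
    ORDER BY region, month
""").fetchall()
for r in rows:
    print(r)
```

The first month in each region has no prior row, so its `change` is NULL; the same pattern (LAG/LEAD, running sums, ranks) carries over to most analytical SQL dialects.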
Job Description: Project Manager
Internal Role: Delivery Manager
Type: Full time
Location: Bangalore
About DataWeave:
DataWeave provides “Competitive Intelligence as a Service” to Retailers and Consumer brands helping them optimize their offerings through effective pricing, assortment and promotional recommendations.
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the Web. At serious scale!
Read more: Become a DataWeaver (http://dataweave.com/about/become-dataweaver)
Job Description:
- Develop a detailed project plan (Ensuring resource availability and allocation) to track progress
- Developing project scopes and objectives, involving all relevant stakeholders and ensuring technical feasibility
- Ensure that all projects are delivered on-time, within scope and within budget
- Coordinate internal resources and third parties/vendors for the flawless execution of projects
- Use appropriate verification techniques to manage risks, changes in project scope, schedule and costs
- Measure project performance using appropriate systems, tools and techniques
- Report and escalate to management as needed
- Establish and maintain relationships with Clients, internal stakeholders, third parties / vendors
- Create and maintain comprehensive project documentation
- Definition of service level agreements (SLAs) for services; ensuring the SLAs are achieved and that data sanity, hygiene, and client expectations are exceeded
- Effectively monitor, control and support delivery, ensuring standard operating procedures and methodologies are followed
- Create KPIs for monitoring and review and publish health stats on a recurring basis
Key Skills / Knowledge required:
- Proven ability to switch context, manage multiple short projects
- Relevant experience of 3-5 years and overall experience of 8+ years
- Exposure to Analytics delivery process, ability to troubleshoot using insights within the data
- Basic Database query skills, with knowledge of Web-crawling concepts is preferred
- Excellent communication skills
- Team management and ability to deliver while working with cross functional teams
- Exposure to Agile Development (Scrum / Kanban) methodology is a plus
CommerceIQ is Hiring Data Scientist (3-5 yrs)
At CommerceIQ, we are building the world’s most sophisticated E-commerce Channel Optimization software to help brands leverage Machine Learning, Analytics and Automation to grow their E-commerce business on all channels, globally.
Using CommerceIQ as a single source of truth, customers have driven a 40% increase in incremental sales, a 20% improvement in profitability, and a 32% reduction in out-of-stock rates on Amazon.
What You’ll Be Doing
As a Senior Data Scientist, you will work closely with Engineering, Product, and Operations teams to build state-of-the-art ML-based solutions for B2B SaaS products. This entails not only leveraging advanced techniques for prediction, time-series forecasting, topic modelling, and optimisation, but also a deep understanding of the business and product.
- Apply excellent problem solving skills to deconstruct and formulate solutions from first-principles
- Work on data science roadmap and build the core engine of our flagship CommerceIQ product
- Collaborate with product and engineering to design product strategy, identify key metrics to drive and support with proof of concept
- Perform rapid prototyping of experimental solutions and develop robust, sustainable and scalable production systems
- Work with large-scale e-commerce data from the biggest brands on Amazon
- Apply out-of-the-box, advanced algorithms to complex problems in real-time systems
- Drive productization of techniques to be made available to a wide range of customers
- You would be working with and mentoring fellow team members on the owned charter
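As a toy illustration of the time-series forecasting mentioned above, a moving-average baseline in pure Python (the sales series and window size are invented; production work would use proper forecasting models):

```python
def moving_average_forecast(series, window=3):
    """Forecast the next point as the mean of the last `window` observations --
    a common baseline that real forecasting models must beat."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    return sum(series[-window:]) / window

weekly_units = [120, 130, 125, 140, 135, 150]
print(moving_average_forecast(weekly_units))  # (140+135+150)/3
```

A baseline like this is useful precisely because it is cheap to compute: any proposed model that cannot outperform it on held-out data is not adding value.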
What we are looking for -
- Bachelor’s or Masters in Computer Science or Maths/Stats from a reputed college with 4+ years of experience in solving data science problems that have driven value to customers
- Good depth and breadth in machine learning (theory and practice), optimization methods, data mining, statistics and linear algebra. Experience in NLP would be an advantage
- Hands-on programming skills and ability to write modular and scalable code in Python/R. Knowledge of SQL is required
- Familiarity with distributed computing architecture like Spark, Map-Reduce paradigm and Hadoop will be an added advantage
- Strong spoken and written communication skills, able to explain complex ideas in a simple, intuitive manner, write/maintain good technical documentation on projects
- Experience with building ML data products in an engineering organization interfacing with other teams and departments to deliver impact
- We are looking for candidates who are curious self-starters and who obsess over customer problems to deliver maximum value.
Job Type: Full-time
Experience:
- Data Scientist: 3 years (Required)
Application Question:
- Looking for product-based industry experience and candidates from tier 1/tier 2 colleges (NIT, BIT, IIT, IIIT, BITS; strong profiles)