Data Scientist (Kofax Accredited Developers)
at A global business process management company
B1 – Data Scientist - Kofax Accredited Developers
Requirement – 3
Mandatory –
- Accreditation of Kofax KTA / KTM
- Experience in Kofax Total Agility Development – 2-3 years minimum
- Ability to translate functional requirements into design
- Experience in requirement gathering, analysis, development, testing, documentation, version control, SDLC, Implementation and process orchestration
- Experience in Kofax Customization, writing Custom Workflow Agents, Custom Modules, Release Scripts
- Application development using Kofax and KTM modules
- Good/advanced understanding of Machine Learning/NLP/Statistics
- Exposure to or understanding of RPA/OCR/cognitive capture tools such as Appian, UiPath, Automation Anywhere, etc.
- Excellent communication skills and collaborative attitude
- Ability to work with multiple internal teams and stakeholders, such as Analytics, RPA, Technology, and Project Management teams
- Good understanding of compliance, data governance and risk control processes
Total Experience – 7-10 years in BPO/KPO/ITES/BFSI/Retail/Travel/Utilities/Service industries
Good to have
- Previous experience working in Agile and hybrid delivery environments
- Knowledge of VB.NET, C#, SQL Server, and web services
Qualification -
- Master's in Statistics/Mathematics/Economics/Econometrics, or BE/B.Tech, MCA, or MBA
a. 4+ years of experience in Azure development using PySpark (Databricks) and Synapse.
b. Real-world project experience using ADF pipelines to bring data from on-premises applications into Azure.
c. Strong working experience transforming data using PySpark on Databricks (a brief illustrative sketch follows this list).
d. Experience with the Synapse database and transformations within Synapse.
e. Strong knowledge of SQL.
f. Experience in working with multiple kinds of source systems (e.g. HANA, Teradata, MS SQL Server, flat files, JSON, etc.)
g. Strong communication skills.
h. Experience working in Agile environments.
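A minimal, illustrative sketch of the kind of PySpark-on-Databricks transformation and Synapse write described in items (b)-(d) above; the storage paths, table names, and connection details are placeholders, not part of the actual role.

```python
# Minimal PySpark sketch (Databricks): read raw data landed by an ADF copy activity,
# apply a simple transformation, and write the result to Azure Synapse.
# All names (paths, tables, connection details) are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Raw files landed in the data lake by an ADF pipeline (placeholder path)
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/orders/")

# Example transformation: de-duplicate, derive a date column, and aggregate
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date", "country")
       .agg(F.sum("amount").alias("total_amount"),
            F.countDistinct("order_id").alias("order_count"))
)

# Write to a dedicated Synapse SQL pool via the Databricks Azure Synapse connector
(orders.write
       .format("com.databricks.spark.sqldw")
       .option("url", "jdbc:sqlserver://example.sql.azuresynapse.net:1433;database=dw")
       .option("tempDir", "abfss://staging@examplelake.dfs.core.windows.net/tmp/")
       .option("forwardSparkAzureStorageCredentials", "true")
       .option("dbTable", "dbo.daily_orders")
       .mode("overwrite")
       .save())
```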
THE ROLE: Sr. Cloud Data Infrastructure Engineer
As a Sr. Cloud Data Infrastructure Engineer with Intuitive, you will be responsible for building new data pipelines and converting legacy pipelines to modern cloud environments, supporting the analytics and data science initiatives across our enterprise customers. You will work closely with SMEs in Data Engineering and Cloud Engineering to create solutions and extend Intuitive's DataOps Engineering projects and initiatives. The Sr. Cloud Data Infrastructure Engineer plays a central, critical role in establishing DataOps/DataX data logistics and management: building data pipelines, enforcing best practices, owning the construction of complex and performant data lake environments, and working closely with Cloud Infrastructure Architects and DevSecOps automation teams. The Sr. Cloud Data Infrastructure Engineer is the main point of contact for everything related to data lake formation and data at scale. In this role, we expect our DataOps leaders to be obsessed with data and with providing insights that help our end customers.
ROLES & RESPONSIBILITIES:
- Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built.
- Develop scalable and reusable frameworks for ingesting large volumes of data from multiple sources.
- Modern Data Orchestration engineering - query tuning, performance tuning, troubleshooting, and debugging big data solutions.
- Provides technical leadership, fosters a team environment, and provides mentorship and feedback to technical resources.
- Deep understanding of ETL/ELT design methodologies, patterns, personas, strategy, and tactics for complex data transformations.
- Data processing/transformation using various technologies such as Spark and cloud services.
- Understand current data engineering pipelines built with legacy SAS tools and convert them to modern cloud pipelines (a brief illustrative sketch of such a conversion follows this list).
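As referenced in the last bullet, a minimal illustrative sketch of converting a legacy SAS-style step to a modern PySpark pipeline; the dataset, columns, and storage paths are hypothetical, not taken from the role description.

```python
# Legacy SAS step being replaced (hypothetical, shown for reference):
#   proc sql;
#     create table claims_summary as
#     select region, sum(paid_amount) as total_paid
#     from claims where status = 'CLOSED'
#     group by region;
#   quit;
#
# Equivalent modern PySpark pipeline (placeholder paths and columns):
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims_summary").getOrCreate()

claims = spark.read.parquet("s3://example-datalake/raw/claims/")

claims_summary = (
    claims.filter(F.col("status") == "CLOSED")
          .groupBy("region")
          .agg(F.sum("paid_amount").alias("total_paid"))
)

claims_summary.write.mode("overwrite").parquet("s3://example-datalake/curated/claims_summary/")
```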
Data Infrastructure Engineer Strategy Objectives: End-to-End Strategy
Define how data is acquired, stored, processed, distributed, and consumed.
Collaborate and share responsibility across disciplines as partners in delivery, progressing our maturity model in the end-to-end data practice.
- Understanding and experience with modern cloud data orchestration and engineering for one or more of the following cloud providers - AWS, Azure, GCP.
- Leading multiple engagements to design and develop data logistic patterns to support data solutions using data modeling techniques (such as file based, normalized or denormalized, star schemas, schema on read, Vault data model, graphs) for mixed workloads, such as OLTP, OLAP, streaming using any formats (structured, semi-structured, unstructured).
- Applying leadership and proven experience with architecting and designing data implementation patterns and engineered solutions using native cloud capabilities that span data ingestion & integration (ingress and egress), data storage (raw & cleansed), data prep & processing, master & reference data management, data virtualization & semantic layer, data consumption & visualization.
- Implementing cloud data solutions in the context of business applications, cost optimization, client's strategic needs and future growth goals as it relates to becoming a 'data driven' organization.
- Applying and creating leading practices that support high availability, scalable, process and storage intensive solutions architectures to data integration/migration, analytics and insights, AI, and ML requirements.
- Applying leadership and review to create high quality detailed documentation related to cloud data Engineering.
- Understanding of one or more of the following is a big plus: CI/CD, cloud DevOps, containers (Kubernetes/Docker, etc.), Python/PySpark/JavaScript.
- Implementing cloud data orchestration and data integration patterns (AWS Glue, Azure Data Factory, Event Hub, Databricks, etc.), storage and processing (Redshift, Azure Synapse, BigQuery, Snowflake)
- Possessing a certification in one of the following is a big plus: AWS/Azure/GCP data engineering or migration.
KEY REQUIREMENTS:
- 10+ years' experience as a data engineer.
- Must have 5+ years implementing data engineering solutions with multiple cloud providers and toolsets.
- This is a hands-on role building data pipelines using cloud-native and partner solutions; hands-on technical experience with data at scale.
- Must have deep expertise in one of the programming languages for data processing (Python, Scala). Experience with Python, PySpark, Hadoop, Hive, and/or Spark to write data pipelines and data processing layers.
- Must have worked with multiple database technologies and patterns. Good SQL experience writing complex SQL transformations.
- Performance tuning of Spark SQL running on S3/data lake/Delta Lake storage, and strong knowledge of Databricks and cluster configurations (an illustrative sketch follows this list).
- Nice to have: Databricks administration, including Databricks security and infrastructure features.
- Experience with development tools for CI/CD, unit and integration testing, automation, and orchestration.
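A brief, hedged illustration of the Spark SQL / Delta Lake tuning called out above; the table name and configuration values are hypothetical and would depend on the actual cluster size and data volumes.

```python
# Illustrative Spark SQL / Delta Lake tuning sketch (hypothetical table and values).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Common knobs when tuning Spark SQL over Delta Lake / object storage
spark.conf.set("spark.sql.shuffle.partitions", "400")                    # size shuffles to data volume
spark.conf.set("spark.sql.adaptive.enabled", "true")                     # adaptive query execution
spark.conf.set("spark.databricks.delta.optimizeWrite.enabled", "true")   # fewer, larger files on write (Databricks)

# Compact small files and co-locate data frequently filtered by event_date (Databricks Delta)
spark.sql("OPTIMIZE events ZORDER BY (event_date)")

# Inspect the physical plan to confirm partition pruning actually happens
spark.sql("""
    SELECT event_date, COUNT(*) AS cnt
    FROM events
    WHERE event_date >= '2023-01-01'
    GROUP BY event_date
""").explain()
```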
● Able to contribute to the gathering of functional requirements, developing technical specifications, and project & test planning
● Demonstrating technical expertise, and solving challenging programming and design problems
● Roughly 80% hands-on coding
● Generate technical documentation and PowerPoint presentations to communicate architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
● Work cross-functionally with various Bidgely teams including product management, QA/QE, various product lines, and/or business units to drive forward results
Requirements
● BS/MS in computer science or equivalent work experience
● 2-4 years’ experience designing and developing applications in Data Engineering
● Hands-on experience with big data ecosystems: Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, Zookeeper
● Expertise with any of the following object-oriented languages: Java/J2EE, Scala, Python
● Strong leadership experience: Leading meetings, presenting if required
● Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
● Expertise in the Software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Experience on Cloud or AWS is preferable
● Have a good understanding of, and ability to develop, software, prototypes, or proofs of concept (POCs) for various Data Engineering requirements (a brief TDD-style sketch follows this list).
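As a loose illustration of the unit-testing/TDD expectation above, a minimal pytest-style test for a small, hypothetical data transformation might look like the sketch below; the function, column names, and rate value are invented for the example, not from the posting.

```python
# Minimal TDD-style sketch: the test specifies the behaviour, then the
# transformation is written to satisfy it. Names and values are hypothetical.
import pandas as pd


def add_energy_cost(df: pd.DataFrame, rate_per_kwh: float) -> pd.DataFrame:
    """Return a copy of df with a 'cost' column derived from 'kwh' usage."""
    out = df.copy()
    out["cost"] = out["kwh"] * rate_per_kwh
    return out


def test_add_energy_cost_computes_cost_per_row():
    usage = pd.DataFrame({"meter_id": ["a", "b"], "kwh": [10.0, 2.5]})
    result = add_energy_cost(usage, rate_per_kwh=0.25)
    assert list(result["cost"]) == [2.5, 0.625]
    assert "cost" not in usage.columns  # the original frame is not mutated
```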
- Performs analytics to extract insights from the organization's raw historical data.
- Generates usable training datasets for MV projects with the help of annotators, if needed.
- Analyses user trends and identifies the biggest bottlenecks in the Hammoq workflow.
- Tests the short- and long-term impact of productized MV models on those trends.
- Skills - NumPy, Pandas, Apache Spark, PySpark, and ETL are mandatory (a small illustrative sketch follows this list).
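A small, hypothetical pandas sketch of the kind of trend/bottleneck analysis described above; the event-log schema (step, duration_sec) and file name are assumptions for illustration, not details from the posting.

```python
# Hypothetical sketch: surface the workflow steps where items spend the most time.
# The file name and columns are placeholders, not the real Hammoq schema.
import pandas as pd

events = pd.read_csv("workflow_events.csv")  # placeholder export of workflow events

# Average and count of time spent per workflow step, sorted to highlight bottlenecks
bottlenecks = (
    events.groupby("step")["duration_sec"]
          .agg(["mean", "count"])
          .sort_values("mean", ascending=False)
)

print(bottlenecks.head(5))  # top candidate bottleneck steps
```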
Location - Remote until COVID restrictions lift (Hyderabad StackNexus office post-COVID)
Experience - 5-7 years
Skills Required - Should have hands-on experience in Azure data modelling, Python, SQL, and Azure Databricks.
Notice period - Immediate to 15 days
The Company
We are a young, fast-growing AI company shaking up how work gets done across the enterprise. Every day, we help clients identify opportunities for automation, and then use a variety of AI and advanced automation techniques to rapidly model manual work in the form of code. Our impact has already been felt across some of the most reputable Fortune 500 companies, who are consequently seeing major gains in efficiency, client satisfaction, and overall savings. It’s an exciting experience to watch companies transform themselves rapidly with Soroco!
Based across the US, UK, and India, our team includes several PhDs and graduates from top-notch universities such as MIT, Harvard, Carnegie Mellon, Dartmouth, and top rankers/medalists from the IITs and NITs. The senior leadership includes a former founder of a VC/hedge fund, a computer scientist from Harvard, and a former founder of a successful digital media firm. Our team has collectively published more than 100 papers in international journals and conferences and has been granted over 20 patents. Our board members include some of the most well-known entrepreneurs across the globe, and our early clients include some of the most innovative Fortune 100 companies.
The Role
In this individual contributor role, the Business Analyst (BA) will work closely with the Data Science Manager in India. BAs will be primarily responsible for analyzing improvement opportunities in business processes, people productivity, and application usage experience, and for other advanced analytics projects, using data collected by the Soroco Scout platform for clients from diverse industries.
Responsibilities include (but are not limited to):
- Understand project objectives and frame an analytics approach to provide the solution.
- Take ownership of extracting, cleansing, structuring, and analyzing data.
- Analyze data using statistical or rule-based techniques to identify actionable insights.
- Prepare PowerPoint presentations and build visualization solutions to present the analysis and actionable insights to clients.
- Brainstorm and perform root cause analysis to provide suggestions to improve the Scout platform.
- Work closely with product managers to build analytical features in the product.
- Manage multiple projects simultaneously, in a fast-paced setting
- Communicate effectively with client engagement, product, and engineering teams
The Candidate
An ideal BA should be passionate and entrepreneurial in nature, with a flexible attitude to learn anything and a willingness to provide the highest level of professional service.
- 2-4 years of analytics work experience with a University degree in Engineering, preferably from Tier-1 or Tier-2 colleges.
- Possess the skill to creatively solve analytical problems and propose solutions.
- Ability to perform data manipulation and data modeling with complex data using SQL/Python
- Knowledge of statistics and experience using statistical packages for analyzing datasets (R/Python)
- Proficiency in Microsoft Office Excel and PowerPoint.
- Impeccable attention to detail with excellent prioritization skills
- Effective verbal, written and interpersonal communication skills.
- Must be a team player and able to build strong working relationships with stakeholders
- Strong capabilities and experience with programming in Python (NumPy & Pandas); a small illustrative sketch follows this list
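A short, hypothetical pandas sketch of the kind of data manipulation and descriptive analysis a BA might run; the file name and columns (user, application, duration_min) are placeholders, not the actual Scout platform schema.

```python
# Hypothetical BA-style analysis on exported activity data.
# File name and columns are placeholders, not the real Scout schema.
import pandas as pd

activity = pd.read_csv("scout_activity_export.csv")

# Cleanse: drop incomplete rows and implausible durations before analysis
activity = activity.dropna(subset=["user", "application", "duration_min"])
activity = activity[activity["duration_min"].between(0, 8 * 60)]

# Productivity view: total time spent per application, per user
app_usage = (
    activity.groupby(["user", "application"])["duration_min"]
            .sum()
            .reset_index()
            .sort_values("duration_min", ascending=False)
)

print(app_usage.head(10))  # feeds the PowerPoint / visualization summary
```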
Bonus Skills:
- Knowledge of machine learning techniques (clustering, classification, and sequencing, among others)
- Experience with visualization tools like Tableau, PowerBI, Qlik.
How You Will Grow:
Soroco believes in supporting you and your career. We will encourage you to grow by providing you with professional development opportunities across multiple business functions. Joining a young company will allow you to explore what is possible and have a high impact.
Senior Software Engineer (Architect), Data
at Uber
Data Platform engineering at Uber is looking for a strong Technical Lead (Level 5a Engineer) who has built high-quality platforms and services that can operate at scale. A 5a Engineer at Uber exhibits the following qualities:
- Demonstrate tech expertise: demonstrate technical skills to go very deep or broad in solving classes of problems or creating broadly leverageable solutions.
- Execute large-scale projects: define, plan, and execute complex and impactful projects. You communicate the vision to peers and stakeholders.
- Collaborate across teams: act as a domain resource for engineers outside your team and help them leverage the right solutions. Facilitate technical discussions and drive to a consensus.
- Coach engineers: coach and mentor less experienced engineers and deeply invest in their learning and success. You give and solicit feedback, both positive and negative, to others you work with to help improve the entire team.
- Tech leadership: lead the effort to define best practices in your immediate team, and help the broader organization establish better technical or business processes.
What You’ll Do
- Build a scalable, reliable, operable and performant data analytics platform for Uber’s engineers, data scientists, products and operations teams.
- Work alongside the pioneers of big data systems such as Hive, YARN, Spark, Presto, Kafka, and Flink to build out a highly reliable, performant, easy-to-use software system for Uber's planet-scale data.
- Become proficient in the multi-tenancy, resource isolation, abuse prevention, and self-serve debuggability aspects of a high-performance, large-scale service while building these capabilities for Uber's engineers and operations folks.
What You’ll Need
- 7+ years' experience building large-scale products, data platforms, and distributed systems in a high-caliber environment.
- Architecture: Identify and solve major architectural problems by going deep in your field or broad across different teams. Extend, improve, or, when needed, build solutions to address architectural gaps or technical debt.
- Software Engineering/Programming: Create frameworks and abstractions that are reliable and reusable. You have advanced knowledge of at least one programming language and are happy to learn more. Our core languages are Java, Python, Go, and Scala.
- Data Engineering: Expertise in one of the big data analytics technologies we currently use such as Apache Hadoop (HDFS and YARN), Apache Hive, Impala, Drill, Spark, Tez, Presto, Calcite, Parquet, Arrow etc. Under the hood experience with similar systems such as Vertica, Apache Impala, Drill, Google Borg, Google BigQuery, Amazon EMR, Amazon RedShift, Docker, Kubernetes, Mesos etc.
- Execution & Results: You tackle large technical projects/problems that are not clearly defined. You anticipate roadblocks and have strategies to de-risk timelines. You orchestrate work that spans multiple teams and keep your stakeholders informed.
- A team player: You believe that you can achieve more on a team, and that the whole is greater than the sum of its parts. You rely on others' candid feedback for continuous improvement.
- Business acumen: You understand requirements beyond the written word. Whether you're working on an API used by other developers, an internal tool consumed by our operations teams, or a feature used by millions of customers, your attention to detail leads to a delightful user experience.