11+ Microsoft App-V Jobs in Bangalore (Bengaluru) | Microsoft App-V Job openings in Bangalore (Bengaluru)
Hi All,
We are hiring a Data Engineer for one of our clients for the Bangalore & Chennai locations.
Strong knowledge of SCCM, App-V, and Intune infrastructure
PowerShell/VBScript/Python
Windows Installer
Knowledge of the Windows 10 registry
Application repackaging
Application sequencing with App-V
Deploying and troubleshooting applications, packages, and task sequences
Security patch deployment and remediation
Windows operating system patching and Defender updates
Thanks,
Mohan.G
Azure – Data Engineer
- At least 2 years hands on experience working with an Agile data engineering team working on big data pipelines using Azure in a commercial environment.
- Dealing with senior stakeholders/leadership
- Understanding of Azure data security and encryption best practices. [ADFS/ACLs]
Databricks – experience writing in and using Databricks, using Python to transform and manipulate data.
Data Factory – experience using Data Factory in an enterprise solution to build data pipelines. Experience calling REST APIs.
Synapse/data warehouse – experience using Synapse/a data warehouse to present data securely and to build & manage data models.
Microsoft SQL Server – we'd expect the candidate to have come from a SQL/data background and progressed into Azure.
Power BI – experience with this is preferred.
Additionally
- Experience using Git as a source control system
- Understanding of DevOps concepts and application
- Understanding of Azure Cloud costs/management and running platforms efficiently
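The Databricks work described above largely comes down to Python transforms inside data pipelines. A minimal, dependency-free sketch of that kind of cleanse-and-aggregate step (field names, regions, and figures are all invented for illustration):

```python
from collections import defaultdict

def transform(rows):
    """Databricks-style transform step: drop invalid rows, then
    aggregate revenue per region (all field names are hypothetical)."""
    totals = defaultdict(float)
    for row in rows:
        if row.get("revenue") is None or row["revenue"] < 0:
            continue  # cleanse: skip nulls and negative amounts
        totals[row["region"]] += row["revenue"]
    return dict(totals)

raw = [
    {"region": "EMEA", "revenue": 120.0},
    {"region": "EMEA", "revenue": -5.0},   # bad record, filtered out
    {"region": "APAC", "revenue": 80.0},
    {"region": "APAC", "revenue": None},   # null, filtered out
]
print(transform(raw))  # {'EMEA': 120.0, 'APAC': 80.0}
```

In a real Databricks notebook the same logic would typically be expressed with PySpark DataFrames; the shape of the work is the same.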
This opening is with an MNC
ROLE AND RESPONSIBILITIES
Should be able to work as an individual contributor and maintain good relationships with stakeholders. Should
be proactive in learning new skills per business requirements. Familiar with extracting relevant data and
cleansing and transforming it into insights that drive business value, using data analytics, data visualization,
and data modeling techniques.
QUALIFICATIONS AND EDUCATION REQUIREMENTS
Technical Bachelor’s Degree.
Non-Technical Degree holders should have 1+ years of relevant experience.
Position: ETL Developer
Location: Mumbai
Exp.Level: 4+ Yrs
Required Skills:
* Strong scripting skills in Python and Unix shell (e.g., K-shell)
* Strong relational database skills, especially with DB2/Sybase
* Ability to create high-quality, optimized stored procedures and queries
* Strong knowledge of relational database performance and tuning, such as proper use of indices, database statistics/reorgs, and de-normalization concepts
* Familiarity with the lifecycle of a trade and the flow of data in an investment banking operation is a plus
* Experienced in Agile development process
* Java Knowledge is a big plus but not essential
* Experience in delivery of metrics / reporting in an enterprise environment (e.g. demonstrated experience in BI tools such as Business Objects, Tableau, report design & delivery) is a plus
* Experience on ETL processes and tools such as Informatica is a plus. Real time message processing experience is a big plus.
* Good team player; Integrity & ownership
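The index-tuning point in the skill list above is easy to demonstrate. A sketch using SQLite as a stand-in for DB2/Sybase (table and column names are made up): the same query moves from a full table scan to an index search once a suitable index exists.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)")
conn.executemany("INSERT INTO trades (symbol, qty) VALUES (?, ?)",
                 [("IBM", 100), ("MSFT", 50), ("IBM", 25)])

def plan(sql):
    """Return SQLite's query-plan description for a statement."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT qty FROM trades WHERE symbol = 'IBM'"
before = plan(query)   # full scan of trades
conn.execute("CREATE INDEX idx_trades_symbol ON trades (symbol)")
after = plan(query)    # now searches via idx_trades_symbol
print(before)
print(after)
```

Each engine words its plans differently, but the before/after contrast is the essence of the "proper use of indices" skill.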
Duties and Responsibilities:
Research and develop innovative use cases, solutions, and quantitative models in video and image
recognition and signal processing for cloudbloom's cross-industry business (e.g., Retail, Energy,
Industry, Mobility, Smart Life and Entertainment).
Design, implement, and demonstrate proofs of concept and working prototypes.
Provide R&D support to productize research prototypes.
Explore emerging tools, techniques, and technologies, and work with academia on cutting-edge solutions.
Collaborate with cross-functional teams and ecosystem partners for mutual business benefit.
Team Management Skills
Academic Qualification
7+ years of professional hands-on work experience in data science, statistical modelling, data
engineering, and predictive analytics assignments
Mandatory Requirements: Bachelor's degree with a STEM background (Science, Technology,
Engineering and Mathematics) with a strong quantitative flavour
Innovative and creative in data analysis, problem solving and presentation of solutions.
Ability to establish effective cross-functional partnerships and relationships at all levels in a
highly collaborative environment
Strong experience in handling multi-national client engagements
Good verbal, writing & presentation skills
Core Expertise
Excellent understanding of the basics of mathematics and statistics (such as differential
equations, linear algebra, matrices, combinatorics, probability, Bayesian statistics,
eigenvectors, Markov models, Fourier analysis).
Building data analytics models using Python, ML libraries, Jupyter/Anaconda, and knowledge
of database query languages like SQL
Good knowledge of machine learning methods like k-Nearest Neighbors, Naive Bayes, SVM,
Decision Forests.
Strong Math Skills (Multivariable Calculus and Linear Algebra) - understanding the
fundamentals of Multivariable Calculus and Linear Algebra is important as they form the basis
of a lot of predictive performance or algorithm optimization techniques.
Deep learning: CNNs, neural networks, RNNs, TensorFlow, PyTorch, computer vision
Large-scale data extraction/mining, data cleansing, diagnostics, and preparation for modeling
Good applied statistical skills, including knowledge of statistical tests, distributions,
regression, and maximum likelihood estimators; multivariate techniques & predictive modeling:
cluster analysis, discriminant analysis, CHAID, logistic & multiple regression analysis
Experience with Data Visualization Tools like Tableau, Power BI, Qlik Sense that help to
visually encode data
Excellent communication skills – it is incredibly important to describe findings to both
technical and non-technical audiences
Capability for continuous learning and knowledge acquisition.
Mentor colleagues for growth and success
Strong Software Engineering Background
Hands-on experience with data science tools
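Several of the methods listed above (e.g., k-Nearest Neighbors) are simple enough to sketch without any ML library. A toy, dependency-free k-NN classifier; the training points and labels are invented:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest
    training points (Euclidean distance). Pure-Python sketch."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"),
         ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
print(knn_predict(train, (1, 0)))  # 'a'
print(knn_predict(train, (5, 5)))  # 'b'
```

In practice one would reach for scikit-learn's `KNeighborsClassifier`, but the voting-over-nearest-points idea is exactly this.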
Work Timing: 5 Days A Week
Responsibilities include:
• Ensure the right stakeholders get the right information at the right time
• Requirement gathering with stakeholders to understand their data requirements
• Creating and deploying reports
• Participate actively in datamarts design discussions
• Work on both RDBMS as well as Big Data for designing BI Solutions
• Write code (queries/procedures) in SQL / Hive / Drill that is both functional and elegant,
following appropriate design patterns
• Design and plan BI solutions to automate regular reporting
• Debugging, monitoring and troubleshooting BI solutions
• Creating and deploying datamarts
• Writing relational and multidimensional database queries
• Integrate heterogeneous data sources into BI solutions
• Ensure Data Integrity of data flowing from heterogeneous data sources into BI solutions.
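The "functional and elegant queries" duty above usually means datamart-style aggregates feeding reports. A sketch using SQLite as a stand-in for the RDBMS/Hive/Drill engines mentioned (schema and figures are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount REAL);
INSERT INTO sales VALUES ('North', 100), ('North', 150), ('South', 80);
""")

# Typical datamart aggregate behind a BI report: total and order
# count per region, largest totals first.
rows = conn.execute("""
    SELECT region, SUM(amount) AS total, COUNT(*) AS orders
    FROM sales
    GROUP BY region
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('North', 250.0, 2), ('South', 80.0, 1)]
```

The same GROUP BY shape carries over to Hive and Drill with only dialect differences.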
Minimum Job Qualifications:
• BE/B.Tech in Computer Science/IT from Top Colleges
• 1-5 years of experience in data warehousing and SQL
• Excellent Analytical Knowledge
• Excellent technical as well as communication skills
• Attention to even the smallest detail is mandatory
• Knowledge of SQL query writing and performance tuning
• Knowledge of Big Data technologies like Apache Hadoop, Apache Hive, Apache Drill
• Knowledge of fundamentals of Business Intelligence
• In-depth knowledge of RDBMS systems, data warehousing and data marts
• Smart, motivated and team oriented
Desirable Requirements
• Sound knowledge of software development in Programming (preferably Java )
• Knowledge of the software development lifecycle (SDLC) and models
Responsibilities:
* 3+ years of Data Engineering Experience - Design, develop, deliver and maintain data infrastructures.
* SQL Specialist – strong knowledge of and seasoned experience with SQL queries
* Languages: Python
* Good communicator, shows initiative, works well with stakeholders.
* Experience working closely with Data Analysts, providing the data they need and guiding them on issues.
* Solid ETL experience and Hadoop / Hive / PySpark / Presto / Spark SQL
* Solid communication and articulation skills
* Able to handle stakeholders independently with minimal intervention from the reporting manager.
* Develop strategies to solve problems in logical yet creative ways.
* Create custom reports and presentations accompanied by strong data visualization and storytelling
We would be excited if you have:
* Excellent communication and interpersonal skills
* Ability to meet deadlines and manage project delivery
* Excellent report-writing and presentation skills
* Critical thinking and problem-solving capabilities
What are we looking for:
- Strong experience in MySQL and writing advanced queries
- Strong experience in Bash and Python
- Familiarity with ElasticSearch, Redis, Java, NodeJS, ClickHouse, S3
- Exposure to cloud services such as AWS, Azure, or GCP
- 2+ years of experience in production support
- Strong experience in log management and performance monitoring like ELK, Prometheus + Grafana, logging services on various cloud platforms
- Strong understanding of Linux OSes like Ubuntu, CentOS / Red Hat Linux
- Interest in learning new languages / framework as needed
- Good written and oral communications skills
- A growth mindset and passionate about building things from the ground up, and most importantly, you should be fun to work with
As a product solutions engineer, you will:
- Analyze recorded runtime issues, diagnose and do occasional code fixes of low to medium complexity
- Work with developers to find and correct more complex issues
- Address urgent issues quickly, work within and measure against customer SLAs
- Write shell and Python scripts, and use scripting to automate manual / repetitive activities
- Build anomaly detectors wherever applicable
- Pass articulated feedback from customers to the development and product team
- Maintain an ongoing record of problem analysis and resolution in an on-call monitoring system
- Offer technical support needed in development
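The "build anomaly detectors" duty above can start as simply as flagging outliers in a latency or error-rate stream. A minimal z-score detector; the threshold and sample data are invented:

```python
import statistics

def anomalies(values, threshold=2.0):
    """Flag points more than `threshold` sample standard deviations
    from the mean -- a basic z-score anomaly detector."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

latencies_ms = [100, 102, 98, 101, 99, 103, 97, 500]  # one obvious spike
print(anomalies(latencies_ms))  # [500]
```

Real detectors on production telemetry would use rolling windows and robust statistics, but the z-score check is the usual starting point.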
- Use data to develop machine learning models that optimize decision making in Credit Risk, Fraud, Marketing, and Operations
- Implement data pipelines, new features, and algorithms that are critical to our production models
- Create scalable strategies to deploy and execute your models
- Write well designed, testable, efficient code
- Identify valuable data sources and automate collection processes
- Preprocess structured and unstructured data
- Analyze large amounts of information to discover trends and patterns
Requirements:
- 1+ years of experience in applied data science or engineering with a focus on machine learning
- Python expertise with good knowledge of machine learning libraries, tools, techniques, and frameworks (e.g. pandas, sklearn, xgboost, lightgbm, logistic regression, random forest classifier, gradient boosting regressor etc)
- strong quantitative and programming skills with a product-driven sensibility
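Logistic regression, one of the techniques named above, fits in a few lines of plain Python. A sketch using batch gradient descent on toy 1-D data (all numbers invented; real work would use sklearn):

```python
import math

def train_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit y ~ sigmoid(w*x + b) by batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))  # predicted probability
            grad_w += (p - y) * x
            grad_b += (p - y)
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

# Toy data: class 1 when x > 2, class 0 otherwise
xs = [0.0, 1.0, 1.5, 2.5, 3.0, 4.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)

def predict(x):
    return 1 / (1 + math.exp(-(w * x + b))) > 0.5

print(predict(0.5), predict(3.5))  # False True
```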
SQL, Python, NumPy, Pandas; knowledge of Hive and data warehousing concepts will be a plus.
JD
- Strong analytical skills with the ability to collect, organise, analyse and interpret trends or patterns in complex data sets and provide reports & visualisations.
- Work with management to prioritise business KPIs and information needs. Locate and define new process improvement opportunities.
- Technical expertise with data models, database design and development, data mining and segmentation techniques
- Proven success in a collaborative, team-oriented environment
- Working experience with geospatial data will be a plus.
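"Analyse and interpret trends or patterns" in the first bullet above often starts with something as small as a least-squares slope over a time series. A dependency-free sketch (the series is invented):

```python
def trend_slope(values):
    """Least-squares slope of a series against its index:
    positive => upward trend, negative => downward."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

weekly_orders = [10, 12, 11, 14, 15, 17]
print(round(trend_slope(weekly_orders), 2))  # 1.34
```

In practice this would be a one-liner with NumPy's `polyfit` or a pandas rolling regression; the arithmetic is the same.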